Test Report: Docker_macOS 12739

41d04d1976fcad0b0b824d850ee7b8db3632a01b:2021-11-17:21389

Failed tests (156/236)

Order  Failed test  Duration (s)
4 TestDownloadOnly/v1.14.0/preload-exists 0.18
26 TestOffline 43.7
28 TestAddons/Setup 45.87
29 TestCertOptions 53.84
30 TestCertExpiration 317.49
31 TestDockerFlags 57.36
32 TestForceSystemdFlag 62.7
33 TestForceSystemdEnv 63.17
38 TestErrorSpam/setup 44.86
47 TestFunctional/serial/StartWithProxy 45.65
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 68.86
50 TestFunctional/serial/KubeContext 0.29
51 TestFunctional/serial/KubectlGetPods 0.29
54 TestFunctional/serial/CacheCmd/cache/add_remote 0.3
56 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.09
57 TestFunctional/serial/CacheCmd/cache/list 0.07
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.66
60 TestFunctional/serial/CacheCmd/cache/delete 0.19
61 TestFunctional/serial/MinikubeKubectlCmd 0.68
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.76
63 TestFunctional/serial/ExtraConfig 68.89
64 TestFunctional/serial/ComponentHealth 0.28
65 TestFunctional/serial/LogsCmd 0.42
66 TestFunctional/serial/LogsFileCmd 0.4
69 TestFunctional/parallel/DashboardCmd 0.54
72 TestFunctional/parallel/StatusCmd 0.68
75 TestFunctional/parallel/ServiceCmd 0.41
77 TestFunctional/parallel/PersistentVolumeClaim 0.25
79 TestFunctional/parallel/SSHCmd 0.72
80 TestFunctional/parallel/CpCmd 0.49
81 TestFunctional/parallel/MySQL 0.31
82 TestFunctional/parallel/FileSync 0.7
83 TestFunctional/parallel/CertSync 1.45
87 TestFunctional/parallel/NodeLabels 0.3
89 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
92 TestFunctional/parallel/Version/components 0.2
93 TestFunctional/parallel/ImageCommands/ImageList 0.17
94 TestFunctional/parallel/ImageCommands/ImageBuild 0.52
96 TestFunctional/parallel/DockerEnv/bash 0.2
97 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
98 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.45
99 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
100 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.69
104 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.22
106 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.37
110 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
111 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 115.89
124 TestIngressAddonLegacy/StartLegacyK8sCluster 48.38
126 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 1.04
128 TestIngressAddonLegacy/serial/ValidateIngressAddons 0.24
131 TestJSONOutput/start/Command 44.53
132 TestJSONOutput/start/Audit 0
134 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
135 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
137 TestJSONOutput/pause/Command 0.18
138 TestJSONOutput/pause/Audit 0
143 TestJSONOutput/unpause/Command 0.55
144 TestJSONOutput/unpause/Audit 0
149 TestJSONOutput/stop/Command 14.68
150 TestJSONOutput/stop/Audit 0
152 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
156 TestKicCustomNetwork/create_custom_network 95.4
162 TestMountStart/serial/StartWithMountFirst 45.88
163 TestMountStart/serial/StartWithMountSecond 45.72
164 TestMountStart/serial/VerifyMountFirst 0.64
165 TestMountStart/serial/VerifyMountSecond 0.45
167 TestMountStart/serial/VerifyMountPostDelete 0.46
168 TestMountStart/serial/Stop 14.98
169 TestMountStart/serial/RestartStopped 66.5
170 TestMountStart/serial/VerifyMountPostStop 0.48
173 TestMultiNode/serial/FreshStart2Nodes 45.68
174 TestMultiNode/serial/DeployApp2Nodes 0.77
175 TestMultiNode/serial/PingHostFrom2Pods 0.33
176 TestMultiNode/serial/AddNode 0.47
177 TestMultiNode/serial/ProfileList 0.56
178 TestMultiNode/serial/CopyFile 0.4
179 TestMultiNode/serial/StopNode 0.65
180 TestMultiNode/serial/StartAfterStop 0.59
181 TestMultiNode/serial/RestartKeepsNodes 84.04
182 TestMultiNode/serial/DeleteNode 0.73
183 TestMultiNode/serial/StopMultiNode 15.31
184 TestMultiNode/serial/RestartMultiNode 69.64
185 TestMultiNode/serial/ValidateNameConflict 102.13
189 TestPreload 48.91
191 TestScheduledStopUnix 50.01
192 TestSkaffold 51.58
194 TestInsufficientStorage 13.05
197 TestKubernetesUpgrade 66.73
198 TestMissingContainerUpgrade 165.75
221 TestPause/serial/Start 46.33
223 TestNoKubernetes/serial/Start 58.5
224 TestPause/serial/SecondStartNoReconfiguration 74.74
227 TestNoKubernetes/serial/Stop 14.97
228 TestNoKubernetes/serial/StartNoArgs 76.32
229 TestPause/serial/Pause 0.84
230 TestPause/serial/VerifyStatus 0.39
231 TestPause/serial/Unpause 0.78
232 TestPause/serial/PauseAgain 0.71
238 TestNetworkPlugins/group/auto/Start 47.65
239 TestNetworkPlugins/group/false/Start 55.14
240 TestNetworkPlugins/group/cilium/Start 49.94
241 TestNetworkPlugins/group/calico/Start 48.97
242 TestNetworkPlugins/group/custom-weave/Start 49.65
243 TestNetworkPlugins/group/enable-default-cni/Start 50.82
244 TestNetworkPlugins/group/kindnet/Start 50.5
245 TestNetworkPlugins/group/bridge/Start 49.26
246 TestNetworkPlugins/group/kubenet/Start 49.08
248 TestStartStop/group/old-k8s-version/serial/FirstStart 46.35
250 TestStartStop/group/no-preload/serial/FirstStart 49.37
251 TestStartStop/group/old-k8s-version/serial/DeployApp 0.67
252 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.55
253 TestStartStop/group/old-k8s-version/serial/Stop 14.99
254 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.62
255 TestStartStop/group/old-k8s-version/serial/SecondStart 77.3
256 TestStartStop/group/no-preload/serial/DeployApp 0.54
257 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.48
258 TestStartStop/group/no-preload/serial/Stop 15.08
259 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.78
260 TestStartStop/group/no-preload/serial/SecondStart 76.79
261 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.26
262 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.29
263 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.46
264 TestStartStop/group/old-k8s-version/serial/Pause 0.71
266 TestStartStop/group/default-k8s-different-port/serial/FirstStart 53.24
267 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.25
268 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.3
269 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 1.5
270 TestStartStop/group/no-preload/serial/Pause 0.71
272 TestStartStop/group/newest-cni/serial/FirstStart 49.74
273 TestStartStop/group/default-k8s-different-port/serial/DeployApp 0.57
274 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.56
275 TestStartStop/group/default-k8s-different-port/serial/Stop 14.97
276 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.62
277 TestStartStop/group/default-k8s-different-port/serial/SecondStart 76.71
280 TestStartStop/group/newest-cni/serial/Stop 15.03
281 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.67
282 TestStartStop/group/newest-cni/serial/SecondStart 72.48
283 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 0.26
284 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 0.29
285 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.5
286 TestStartStop/group/default-k8s-different-port/serial/Pause 0.71
288 TestStartStop/group/embed-certs/serial/FirstStart 53.71
291 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.57
292 TestStartStop/group/newest-cni/serial/Pause 0.73
293 TestStartStop/group/embed-certs/serial/DeployApp 0.55
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.5
295 TestStartStop/group/embed-certs/serial/Stop 14.99
296 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.62
297 TestStartStop/group/embed-certs/serial/SecondStart 72.26
298 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.25
299 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.29
300 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.46
301 TestStartStop/group/embed-certs/serial/Pause 0.73
TestDownloadOnly/v1.14.0/preload-exists (0.18s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
aaa_download_only_test.go:105: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.14.0/preload-exists (0.18s)
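
The check behind this failure is essentially an existence test on the expected preload tarball under the job's MINIKUBE_HOME. The Go sketch below reproduces that kind of check; the path layout and helper name are assumptions taken only from the error message above, not necessarily how aaa_download_only_test.go implements it.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadTarballPath rebuilds the cache path seen in the error above.
// Treating this layout as fixed is an assumption for illustration only.
func preloadTarballPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v14-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, ".minikube", "cache", "preloaded-tarball", name)
}

func main() {
	// MINIKUBE_HOME stands in for the per-job workspace shown in the report.
	tarball := preloadTarballPath(os.Getenv("MINIKUBE_HOME"), "v1.14.0")

	// The failure above corresponds to this stat returning "no such file or directory".
	if _, err := os.Stat(tarball); err != nil {
		fmt.Printf("failed to verify preloaded tarball file exists: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("preload tarball present:", tarball)
}
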
TestOffline (43.7s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20211117121607-2067 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-20211117121607-2067 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 80 (40.586496225s)

-- stdout --
	* [offline-docker-20211117121607-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node offline-docker-20211117121607-2067 in cluster offline-docker-20211117121607-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-20211117121607-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:16:07.270269    9628 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:16:07.270424    9628 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:16:07.270429    9628 out.go:310] Setting ErrFile to fd 2...
	I1117 12:16:07.270433    9628 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:16:07.270514    9628 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:16:07.270829    9628 out.go:304] Setting JSON to false
	I1117 12:16:07.297436    9628 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":2742,"bootTime":1637177425,"procs":334,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:16:07.297569    9628 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:16:07.324770    9628 out.go:176] * [offline-docker-20211117121607-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:16:07.325001    9628 notify.go:174] Checking for updates...
	I1117 12:16:07.372201    9628 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:16:07.398356    9628 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:16:07.424322    9628 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:16:07.450120    9628 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:16:07.450570    9628 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:16:07.450615    9628 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:16:07.550824    9628 docker.go:132] docker version: linux-20.10.5
	I1117 12:16:07.550946    9628 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:16:07.721460    9628 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:16:07.661212964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:16:07.748754    9628 out.go:176] * Using the docker driver based on user configuration
	I1117 12:16:07.748796    9628 start.go:280] selected driver: docker
	I1117 12:16:07.748807    9628 start.go:775] validating driver "docker" against <nil>
	I1117 12:16:07.748834    9628 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:16:07.752352    9628 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:16:07.916657    9628 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:16:07.857446895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:16:07.916784    9628 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:16:07.916969    9628 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:16:07.916988    9628 cni.go:93] Creating CNI manager for ""
	I1117 12:16:07.917011    9628 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:16:07.917038    9628 start_flags.go:282] config:
	{Name:offline-docker-20211117121607-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:offline-docker-20211117121607-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:16:07.944015    9628 out.go:176] * Starting control plane node offline-docker-20211117121607-2067 in cluster offline-docker-20211117121607-2067
	I1117 12:16:07.944068    9628 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:16:07.969697    9628 out.go:176] * Pulling base image ...
	I1117 12:16:07.969731    9628 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:16:07.969765    9628 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:16:07.969802    9628 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:16:07.969827    9628 cache.go:57] Caching tarball of preloaded images
	I1117 12:16:07.969960    9628 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:16:07.969968    9628 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:16:07.970543    9628 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/offline-docker-20211117121607-2067/config.json ...
	I1117 12:16:07.970625    9628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/offline-docker-20211117121607-2067/config.json: {Name:mkdd45c254ecd0b532211e9227d23da27ab8f3d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:16:08.087929    9628 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:16:08.087961    9628 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:16:08.087974    9628 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:16:08.088028    9628 start.go:313] acquiring machines lock for offline-docker-20211117121607-2067: {Name:mk1a613a62adb192869d05e4c8375ad3688c8c8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:16:08.088187    9628 start.go:317] acquired machines lock for "offline-docker-20211117121607-2067" in 146.691µs
	I1117 12:16:08.088229    9628 start.go:89] Provisioning new machine with config: &{Name:offline-docker-20211117121607-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:offline-docker-20211117121607-2067 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:16:08.088335    9628 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:16:08.115395    9628 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:16:08.115801    9628 start.go:160] libmachine.API.Create for "offline-docker-20211117121607-2067" (driver="docker")
	I1117 12:16:08.115858    9628 client.go:168] LocalClient.Create starting
	I1117 12:16:08.116044    9628 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:16:08.136888    9628 main.go:130] libmachine: Decoding PEM data...
	I1117 12:16:08.136931    9628 main.go:130] libmachine: Parsing certificate...
	I1117 12:16:08.137017    9628 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:16:08.137062    9628 main.go:130] libmachine: Decoding PEM data...
	I1117 12:16:08.137073    9628 main.go:130] libmachine: Parsing certificate...
	I1117 12:16:08.137699    9628 cli_runner.go:115] Run: docker network inspect offline-docker-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:16:08.240894    9628 cli_runner.go:162] docker network inspect offline-docker-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:16:08.241000    9628 network_create.go:254] running [docker network inspect offline-docker-20211117121607-2067] to gather additional debugging logs...
	I1117 12:16:08.241014    9628 cli_runner.go:115] Run: docker network inspect offline-docker-20211117121607-2067
	W1117 12:16:08.349276    9628 cli_runner.go:162] docker network inspect offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:08.349298    9628 network_create.go:257] error running [docker network inspect offline-docker-20211117121607-2067]: docker network inspect offline-docker-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20211117121607-2067
	I1117 12:16:08.349317    9628 network_create.go:259] output of [docker network inspect offline-docker-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20211117121607-2067
	
	** /stderr **
	I1117 12:16:08.349426    9628 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:16:08.458807    9628 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000112478] misses:0}
	I1117 12:16:08.458847    9628 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:16:08.458874    9628 network_create.go:106] attempt to create docker network offline-docker-20211117121607-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:16:08.458971    9628 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117121607-2067
	I1117 12:16:12.405907    9628 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117121607-2067: (3.945400638s)
	I1117 12:16:12.405929    9628 network_create.go:90] docker network offline-docker-20211117121607-2067 192.168.49.0/24 created
	I1117 12:16:12.405948    9628 kic.go:106] calculated static IP "192.168.49.2" for the "offline-docker-20211117121607-2067" container
	I1117 12:16:12.406046    9628 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:16:12.547927    9628 cli_runner.go:115] Run: docker volume create offline-docker-20211117121607-2067 --label name.minikube.sigs.k8s.io=offline-docker-20211117121607-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:16:12.659681    9628 oci.go:102] Successfully created a docker volume offline-docker-20211117121607-2067
	I1117 12:16:12.659788    9628 cli_runner.go:115] Run: docker run --rm --name offline-docker-20211117121607-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20211117121607-2067 --entrypoint /usr/bin/test -v offline-docker-20211117121607-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:16:13.228747    9628 oci.go:106] Successfully prepared a docker volume offline-docker-20211117121607-2067
	E1117 12:16:13.228807    9628 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:16:13.228815    9628 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:16:13.228829    9628 client.go:171] LocalClient.Create took 5.111041512s
	I1117 12:16:13.228842    9628 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:16:13.228944    9628 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:16:15.237782    9628 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:16:15.237879    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:15.377765    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:15.377870    9628 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:15.654425    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:15.774157    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:15.774241    9628 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:16.315088    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:16.433208    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:16.433285    9628 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:17.092909    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:17.219468    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	W1117 12:16:17.219567    9628 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	
	W1117 12:16:17.219593    9628 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:17.219606    9628 start.go:129] duration metric: createHost completed in 9.12820389s
	I1117 12:16:17.219614    9628 start.go:80] releasing machines lock for "offline-docker-20211117121607-2067", held for 9.128379876s
	W1117 12:16:17.219631    9628 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:16:17.220202    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:17.359812    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:16:17.359864    9628 delete.go:82] Unable to get host status for offline-docker-20211117121607-2067, assuming it has already been deleted: state: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	W1117 12:16:17.360015    9628 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:16:17.360030    9628 start.go:547] Will try again in 5 seconds ...
	I1117 12:16:18.723286    9628 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.492844385s)
	I1117 12:16:18.723331    9628 kic.go:188] duration metric: took 5.493033 seconds to extract preloaded images to volume
	I1117 12:16:22.361230    9628 start.go:313] acquiring machines lock for offline-docker-20211117121607-2067: {Name:mk1a613a62adb192869d05e4c8375ad3688c8c8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:16:22.361349    9628 start.go:317] acquired machines lock for "offline-docker-20211117121607-2067" in 97.487µs
	I1117 12:16:22.361378    9628 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:16:22.361385    9628 fix.go:55] fixHost starting: 
	I1117 12:16:22.361670    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:22.483446    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:16:22.483547    9628 fix.go:108] recreateIfNeeded on offline-docker-20211117121607-2067: state= err=unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:22.483572    9628 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:16:22.531093    9628 out.go:176] * docker "offline-docker-20211117121607-2067" container is missing, will recreate.
	I1117 12:16:22.531127    9628 delete.go:124] DEMOLISHING offline-docker-20211117121607-2067 ...
	I1117 12:16:22.531263    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:22.664428    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:16:22.664477    9628 stop.go:75] unable to get state: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:22.664501    9628 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:22.664967    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:22.787929    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:16:22.787978    9628 delete.go:82] Unable to get host status for offline-docker-20211117121607-2067, assuming it has already been deleted: state: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:22.788098    9628 cli_runner.go:115] Run: docker container inspect -f {{.Id}} offline-docker-20211117121607-2067
	W1117 12:16:22.911651    9628 cli_runner.go:162] docker container inspect -f {{.Id}} offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:22.911686    9628 kic.go:360] could not find the container offline-docker-20211117121607-2067 to remove it. will try anyways
	I1117 12:16:22.911815    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:23.038408    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:16:23.038462    9628 oci.go:83] error getting container status, will try to delete anyways: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:23.038601    9628 cli_runner.go:115] Run: docker exec --privileged -t offline-docker-20211117121607-2067 /bin/bash -c "sudo init 0"
	W1117 12:16:23.162030    9628 cli_runner.go:162] docker exec --privileged -t offline-docker-20211117121607-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:16:23.162057    9628 oci.go:656] error shutdown offline-docker-20211117121607-2067: docker exec --privileged -t offline-docker-20211117121607-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:24.162495    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:24.294949    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:16:24.295009    9628 oci.go:668] temporary error verifying shutdown: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:24.295029    9628 oci.go:670] temporary error: container offline-docker-20211117121607-2067 status is  but expect it to be exited
	I1117 12:16:24.295063    9628 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:24.757691    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:24.879776    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:16:24.879822    9628 oci.go:668] temporary error verifying shutdown: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:24.879832    9628 oci.go:670] temporary error: container offline-docker-20211117121607-2067 status is  but expect it to be exited
	I1117 12:16:24.879856    9628 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:25.774739    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:25.885104    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:16:25.885144    9628 oci.go:668] temporary error verifying shutdown: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:25.885153    9628 oci.go:670] temporary error: container offline-docker-20211117121607-2067 status is  but expect it to be exited
	I1117 12:16:25.885175    9628 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:26.531419    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:26.640888    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:16:26.640936    9628 oci.go:668] temporary error verifying shutdown: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:26.640945    9628 oci.go:670] temporary error: container offline-docker-20211117121607-2067 status is  but expect it to be exited
	I1117 12:16:26.640972    9628 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:27.756644    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:27.859477    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:16:27.859516    9628 oci.go:668] temporary error verifying shutdown: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:27.859525    9628 oci.go:670] temporary error: container offline-docker-20211117121607-2067 status is  but expect it to be exited
	I1117 12:16:27.859547    9628 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:29.371514    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:29.474339    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:16:29.474378    9628 oci.go:668] temporary error verifying shutdown: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:29.474388    9628 oci.go:670] temporary error: container offline-docker-20211117121607-2067 status is  but expect it to be exited
	I1117 12:16:29.474416    9628 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:32.518546    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:32.620749    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:16:32.620795    9628 oci.go:668] temporary error verifying shutdown: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:32.620806    9628 oci.go:670] temporary error: container offline-docker-20211117121607-2067 status is  but expect it to be exited
	I1117 12:16:32.620834    9628 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:38.407630    9628 cli_runner.go:115] Run: docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}
	W1117 12:16:38.509094    9628 cli_runner.go:162] docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:16:38.509143    9628 oci.go:668] temporary error verifying shutdown: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:38.509152    9628 oci.go:670] temporary error: container offline-docker-20211117121607-2067 status is  but expect it to be exited
	I1117 12:16:38.509199    9628 oci.go:87] couldn't shut down offline-docker-20211117121607-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	 
	I1117 12:16:38.509285    9628 cli_runner.go:115] Run: docker rm -f -v offline-docker-20211117121607-2067
	I1117 12:16:38.609896    9628 cli_runner.go:115] Run: docker container inspect -f {{.Id}} offline-docker-20211117121607-2067
	W1117 12:16:38.710398    9628 cli_runner.go:162] docker container inspect -f {{.Id}} offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:38.710518    9628 cli_runner.go:115] Run: docker network inspect offline-docker-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:16:38.813183    9628 cli_runner.go:115] Run: docker network rm offline-docker-20211117121607-2067
	W1117 12:16:39.468944    9628 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:16:39.468964    9628 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:16:40.469093    9628 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:16:40.496036    9628 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:16:40.496187    9628 start.go:160] libmachine.API.Create for "offline-docker-20211117121607-2067" (driver="docker")
	I1117 12:16:40.496218    9628 client.go:168] LocalClient.Create starting
	I1117 12:16:40.496373    9628 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:16:40.496472    9628 main.go:130] libmachine: Decoding PEM data...
	I1117 12:16:40.496495    9628 main.go:130] libmachine: Parsing certificate...
	I1117 12:16:40.496584    9628 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:16:40.496639    9628 main.go:130] libmachine: Decoding PEM data...
	I1117 12:16:40.496657    9628 main.go:130] libmachine: Parsing certificate...
	I1117 12:16:40.497289    9628 cli_runner.go:115] Run: docker network inspect offline-docker-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:16:40.597985    9628 cli_runner.go:162] docker network inspect offline-docker-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:16:40.598094    9628 network_create.go:254] running [docker network inspect offline-docker-20211117121607-2067] to gather additional debugging logs...
	I1117 12:16:40.598111    9628 cli_runner.go:115] Run: docker network inspect offline-docker-20211117121607-2067
	W1117 12:16:40.699337    9628 cli_runner.go:162] docker network inspect offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:40.699362    9628 network_create.go:257] error running [docker network inspect offline-docker-20211117121607-2067]: docker network inspect offline-docker-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20211117121607-2067
	I1117 12:16:40.699374    9628 network_create.go:259] output of [docker network inspect offline-docker-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20211117121607-2067
	
	** /stderr **
	I1117 12:16:40.699459    9628 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:16:40.802444    9628 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000112478] amended:false}} dirty:map[] misses:0}
	I1117 12:16:40.802488    9628 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:16:40.802675    9628 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000112478] amended:true}} dirty:map[192.168.49.0:0xc000112478 192.168.58.0:0xc0001181a0] misses:0}
	I1117 12:16:40.802690    9628 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:16:40.802703    9628 network_create.go:106] attempt to create docker network offline-docker-20211117121607-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:16:40.802788    9628 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117121607-2067
	I1117 12:16:41.898139    9628 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117121607-2067: (1.095235047s)
	I1117 12:16:41.898172    9628 network_create.go:90] docker network offline-docker-20211117121607-2067 192.168.58.0/24 created
	I1117 12:16:41.898187    9628 kic.go:106] calculated static IP "192.168.58.2" for the "offline-docker-20211117121607-2067" container
	I1117 12:16:41.898299    9628 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:16:41.999484    9628 cli_runner.go:115] Run: docker volume create offline-docker-20211117121607-2067 --label name.minikube.sigs.k8s.io=offline-docker-20211117121607-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:16:42.101283    9628 oci.go:102] Successfully created a docker volume offline-docker-20211117121607-2067
	I1117 12:16:42.101448    9628 cli_runner.go:115] Run: docker run --rm --name offline-docker-20211117121607-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20211117121607-2067 --entrypoint /usr/bin/test -v offline-docker-20211117121607-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:16:42.500242    9628 oci.go:106] Successfully prepared a docker volume offline-docker-20211117121607-2067
	E1117 12:16:42.500291    9628 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:16:42.500303    9628 client.go:171] LocalClient.Create took 2.003987462s
	I1117 12:16:42.500310    9628 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:16:42.500330    9628 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:16:42.500476    9628 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:16:44.507904    9628 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:16:44.508078    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:44.646344    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:44.646533    9628 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:44.832775    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:44.977544    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:44.977674    9628 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:45.308863    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:45.438873    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:45.438955    9628 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:45.907836    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:46.028175    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	W1117 12:16:46.028280    9628 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	
	W1117 12:16:46.028319    9628 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:46.028335    9628 start.go:129] duration metric: createHost completed in 5.559003034s
	I1117 12:16:46.028421    9628 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:16:46.028502    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:46.142618    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:46.142713    9628 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:46.341345    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:46.459546    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:46.459640    9628 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:46.757921    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:46.871138    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	I1117 12:16:46.871219    9628 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:47.541437    9628 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067
	W1117 12:16:47.669440    9628 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067 returned with exit code 1
	W1117 12:16:47.669540    9628 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	
	W1117 12:16:47.669554    9628 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
	I1117 12:16:47.669566    9628 fix.go:57] fixHost completed within 25.306108179s
	I1117 12:16:47.669577    9628 start.go:80] releasing machines lock for "offline-docker-20211117121607-2067", held for 25.306148514s
	W1117 12:16:47.669732    9628 out.go:241] * Failed to start docker container. Running "minikube delete -p offline-docker-20211117121607-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p offline-docker-20211117121607-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:16:47.718231    9628 out.go:176] 
	W1117 12:16:47.718372    9628 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:16:47.718382    9628 out.go:241] * 
	* 
	W1117 12:16:47.719264    9628 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:16:47.797295    9628 out.go:176] 
** /stderr **
aab_offline_test.go:59: out/minikube-darwin-amd64 start -p offline-docker-20211117121607-2067 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 80
panic.go:642: *** TestOffline FAILED at 2021-11-17 12:16:47.827829 -0800 PST m=+1603.557489593
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-20211117121607-2067
helpers_test.go:235: (dbg) docker inspect offline-docker-20211117121607-2067:
-- stdout --
	[
	    {
	        "Name": "offline-docker-20211117121607-2067",
	        "Id": "e38ff8ec4623d6863ab4e321a302ec3933c61f2ef684e106a8789cfe3f7f4424",
	        "Created": "2021-11-17T20:16:40.916538689Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
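The inspect output above shows what the failed run leaves behind: the bridge network offline-docker-20211117121607-2067 exists with subnet 192.168.58.0/24, but "Containers": {} confirms that no kic container was ever attached to it. A minimal manual check and cleanup, sketched only from commands that already appear in this log (the profile name is specific to this run; the final "minikube delete" is the same cleanup the test harness performs below):

	# Confirm the orphaned network and volume are still present
	# (simplified form of the Go template the log uses for "docker network inspect"):
	docker network inspect offline-docker-20211117121607-2067 --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	docker volume ls --filter name=offline-docker-20211117121607-2067
	# Remove the profile together with its network and volume:
	out/minikube-darwin-amd64 delete -p offline-docker-20211117121607-2067
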
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-20211117121607-2067 -n offline-docker-20211117121607-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-20211117121607-2067 -n offline-docker-20211117121607-2067: exit status 7 (179.783305ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 12:16:48.136235   10022 status.go:247] status error: host: state: unknown state "offline-docker-20211117121607-2067": docker container inspect offline-docker-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117121607-2067
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-20211117121607-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-20211117121607-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20211117121607-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20211117121607-2067: (2.789453744s)
--- FAIL: TestOffline (43.70s)
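TestOffline and the TestAddons/Setup run below fail for the same reason: oci.go:173 reports "Unable to locate kernel modules" on this Docker Desktop host, LocalClient.Create aborts before the kic container is ever run, and every later port-22 lookup fails with "No such container". A hedged sketch for reproducing the failure outside the test harness, reusing the exact invocation from aab_offline_test.go:59 and the log-collection command suggested in the error box (binary path and profile name are taken from this report):

	# Re-run the failing invocation directly:
	out/minikube-darwin-amd64 start -p offline-docker-20211117121607-2067 \
	  --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
	# On failure, collect full logs to attach to a GitHub issue:
	out/minikube-darwin-amd64 logs --file=logs.txt -p offline-docker-20211117121607-2067
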

                                                
                                    
TestAddons/Setup (45.87s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20211117115052-2067 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p addons-20211117115052-2067 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 80 (45.866261912s)
-- stdout --
	* [addons-20211117115052-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node addons-20211117115052-2067 in cluster addons-20211117115052-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "addons-20211117115052-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	
-- /stdout --
** stderr ** 
	I1117 11:50:52.692682    2329 out.go:297] Setting OutFile to fd 1 ...
	I1117 11:50:52.692878    2329 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:50:52.692883    2329 out.go:310] Setting ErrFile to fd 2...
	I1117 11:50:52.692886    2329 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:50:52.692972    2329 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 11:50:52.693299    2329 out.go:304] Setting JSON to false
	I1117 11:50:52.717083    2329 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1227,"bootTime":1637177425,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 11:50:52.717181    2329 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 11:50:52.744471    2329 out.go:176] * [addons-20211117115052-2067] minikube v1.24.0 on Darwin 11.1
	I1117 11:50:52.744684    2329 notify.go:174] Checking for updates...
	I1117 11:50:52.770664    2329 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 11:50:52.797089    2329 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 11:50:52.822849    2329 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 11:50:52.848631    2329 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 11:50:52.848960    2329 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 11:50:52.934927    2329 docker.go:132] docker version: linux-20.10.5
	I1117 11:50:52.935047    2329 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:50:53.080117    2329 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-11-17 19:50:53.032089293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:50:53.128921    2329 out.go:176] * Using the docker driver based on user configuration
	I1117 11:50:53.129028    2329 start.go:280] selected driver: docker
	I1117 11:50:53.129044    2329 start.go:775] validating driver "docker" against <nil>
	I1117 11:50:53.129064    2329 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 11:50:53.132375    2329 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:50:53.279308    2329 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-11-17 19:50:53.228418456 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:50:53.279404    2329 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 11:50:53.279553    2329 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 11:50:53.279569    2329 cni.go:93] Creating CNI manager for ""
	I1117 11:50:53.279575    2329 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 11:50:53.279580    2329 start_flags.go:282] config:
	{Name:addons-20211117115052-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:addons-20211117115052-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 11:50:53.306316    2329 out.go:176] * Starting control plane node addons-20211117115052-2067 in cluster addons-20211117115052-2067
	I1117 11:50:53.306379    2329 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 11:50:53.376063    2329 out.go:176] * Pulling base image ...
	I1117 11:50:53.376172    2329 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 11:50:53.376257    2329 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 11:50:53.376259    2329 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 11:50:53.376284    2329 cache.go:57] Caching tarball of preloaded images
	I1117 11:50:53.376507    2329 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 11:50:53.376531    2329 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 11:50:53.378703    2329 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/addons-20211117115052-2067/config.json ...
	I1117 11:50:53.378874    2329 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/addons-20211117115052-2067/config.json: {Name:mkaf70d2ce0a9e909abfcd28c1cd2c1438922049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 11:50:53.486156    2329 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 11:50:53.486174    2329 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 11:50:53.486186    2329 cache.go:206] Successfully downloaded all kic artifacts
	I1117 11:50:53.486227    2329 start.go:313] acquiring machines lock for addons-20211117115052-2067: {Name:mkcab6af672d97a027c310db96ebb41daa3288ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:50:53.486369    2329 start.go:317] acquired machines lock for "addons-20211117115052-2067" in 130.94µs
	I1117 11:50:53.486395    2329 start.go:89] Provisioning new machine with config: &{Name:addons-20211117115052-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:addons-20211117115052-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 11:50:53.486463    2329 start.go:126] createHost starting for "" (driver="docker")
	I1117 11:50:53.534158    2329 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 11:50:53.534504    2329 start.go:160] libmachine.API.Create for "addons-20211117115052-2067" (driver="docker")
	I1117 11:50:53.534549    2329 client.go:168] LocalClient.Create starting
	I1117 11:50:53.534842    2329 main.go:130] libmachine: Creating CA: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 11:50:53.628436    2329 main.go:130] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 11:50:53.763389    2329 cli_runner.go:115] Run: docker network inspect addons-20211117115052-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 11:50:53.866394    2329 cli_runner.go:162] docker network inspect addons-20211117115052-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 11:50:53.866508    2329 network_create.go:254] running [docker network inspect addons-20211117115052-2067] to gather additional debugging logs...
	I1117 11:50:53.866527    2329 cli_runner.go:115] Run: docker network inspect addons-20211117115052-2067
	W1117 11:50:53.962079    2329 cli_runner.go:162] docker network inspect addons-20211117115052-2067 returned with exit code 1
	I1117 11:50:53.962105    2329 network_create.go:257] error running [docker network inspect addons-20211117115052-2067]: docker network inspect addons-20211117115052-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20211117115052-2067
	I1117 11:50:53.962121    2329 network_create.go:259] output of [docker network inspect addons-20211117115052-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20211117115052-2067
	
	** /stderr **
	I1117 11:50:53.962227    2329 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:50:54.061716    2329 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00012c178] misses:0}
	I1117 11:50:54.061759    2329 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:50:54.061782    2329 network_create.go:106] attempt to create docker network addons-20211117115052-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 11:50:54.061882    2329 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20211117115052-2067
	I1117 11:50:58.151075    2329 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20211117115052-2067: (4.089174726s)
	I1117 11:50:58.151098    2329 network_create.go:90] docker network addons-20211117115052-2067 192.168.49.0/24 created
	I1117 11:50:58.151112    2329 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20211117115052-2067" container
	I1117 11:50:58.151235    2329 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 11:50:58.248941    2329 cli_runner.go:115] Run: docker volume create addons-20211117115052-2067 --label name.minikube.sigs.k8s.io=addons-20211117115052-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 11:50:58.346725    2329 oci.go:102] Successfully created a docker volume addons-20211117115052-2067
	I1117 11:50:58.346885    2329 cli_runner.go:115] Run: docker run --rm --name addons-20211117115052-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20211117115052-2067 --entrypoint /usr/bin/test -v addons-20211117115052-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 11:50:59.032553    2329 oci.go:106] Successfully prepared a docker volume addons-20211117115052-2067
	E1117 11:50:59.032619    2329 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 11:50:59.032630    2329 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 11:50:59.032644    2329 client.go:171] LocalClient.Create took 5.49812616s
	I1117 11:50:59.032653    2329 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 11:50:59.032764    2329 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117115052-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 11:51:01.032920    2329 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:51:01.033068    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:01.166312    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:01.166461    2329 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:01.443468    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:01.554850    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:01.554942    2329 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:02.104584    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:02.218718    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:02.218802    2329 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:02.875335    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:02.988090    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	W1117 11:51:02.988200    2329 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	
	W1117 11:51:02.988225    2329 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:02.988246    2329 start.go:129] duration metric: createHost completed in 9.501846736s
	I1117 11:51:02.988254    2329 start.go:80] releasing machines lock for "addons-20211117115052-2067", held for 9.501948716s
	W1117 11:51:02.988275    2329 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 11:51:02.988857    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:03.335212    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:51:03.335256    2329 delete.go:82] Unable to get host status for addons-20211117115052-2067, assuming it has already been deleted: state: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	W1117 11:51:03.335419    2329 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 11:51:03.335429    2329 start.go:547] Will try again in 5 seconds ...
	I1117 11:51:05.139525    2329 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117115052-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.106774297s)
	I1117 11:51:05.139540    2329 kic.go:188] duration metric: took 6.106934 seconds to extract preloaded images to volume
	I1117 11:51:08.338551    2329 start.go:313] acquiring machines lock for addons-20211117115052-2067: {Name:mkcab6af672d97a027c310db96ebb41daa3288ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:51:08.338924    2329 start.go:317] acquired machines lock for "addons-20211117115052-2067" in 338.385µs
	I1117 11:51:08.338987    2329 start.go:93] Skipping create...Using existing machine configuration
	I1117 11:51:08.339002    2329 fix.go:55] fixHost starting: 
	I1117 11:51:08.339516    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:08.446627    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:51:08.446680    2329 fix.go:108] recreateIfNeeded on addons-20211117115052-2067: state= err=unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:08.446700    2329 fix.go:113] machineExists: false. err=machine does not exist
	I1117 11:51:08.473582    2329 out.go:176] * docker "addons-20211117115052-2067" container is missing, will recreate.
	I1117 11:51:08.473638    2329 delete.go:124] DEMOLISHING addons-20211117115052-2067 ...
	I1117 11:51:08.473854    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:08.572406    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:51:08.572448    2329 stop.go:75] unable to get state: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:08.572462    2329 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:08.572872    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:08.669802    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:51:08.669846    2329 delete.go:82] Unable to get host status for addons-20211117115052-2067, assuming it has already been deleted: state: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:08.669940    2329 cli_runner.go:115] Run: docker container inspect -f {{.Id}} addons-20211117115052-2067
	W1117 11:51:08.767921    2329 cli_runner.go:162] docker container inspect -f {{.Id}} addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:08.767956    2329 kic.go:360] could not find the container addons-20211117115052-2067 to remove it. will try anyways
	I1117 11:51:08.768053    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:08.867016    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:51:08.867057    2329 oci.go:83] error getting container status, will try to delete anyways: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:08.867144    2329 cli_runner.go:115] Run: docker exec --privileged -t addons-20211117115052-2067 /bin/bash -c "sudo init 0"
	W1117 11:51:08.964447    2329 cli_runner.go:162] docker exec --privileged -t addons-20211117115052-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 11:51:08.964474    2329 oci.go:656] error shutdown addons-20211117115052-2067: docker exec --privileged -t addons-20211117115052-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:09.971646    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:10.070706    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:51:10.070757    2329 oci.go:668] temporary error verifying shutdown: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:10.070770    2329 oci.go:670] temporary error: container addons-20211117115052-2067 status is  but expect it to be exited
	I1117 11:51:10.070792    2329 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:10.537001    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:10.638832    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:51:10.638871    2329 oci.go:668] temporary error verifying shutdown: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:10.638881    2329 oci.go:670] temporary error: container addons-20211117115052-2067 status is  but expect it to be exited
	I1117 11:51:10.638900    2329 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:11.535630    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:11.635862    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:51:11.635903    2329 oci.go:668] temporary error verifying shutdown: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:11.635913    2329 oci.go:670] temporary error: container addons-20211117115052-2067 status is  but expect it to be exited
	I1117 11:51:11.635936    2329 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:12.276816    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:12.376021    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:51:12.376061    2329 oci.go:668] temporary error verifying shutdown: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:12.376076    2329 oci.go:670] temporary error: container addons-20211117115052-2067 status is  but expect it to be exited
	I1117 11:51:12.376097    2329 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:13.488437    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:13.588082    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:51:13.588129    2329 oci.go:668] temporary error verifying shutdown: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:13.588139    2329 oci.go:670] temporary error: container addons-20211117115052-2067 status is  but expect it to be exited
	I1117 11:51:13.588159    2329 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:15.104690    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:15.202379    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:51:15.202421    2329 oci.go:668] temporary error verifying shutdown: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:15.202430    2329 oci.go:670] temporary error: container addons-20211117115052-2067 status is  but expect it to be exited
	I1117 11:51:15.202451    2329 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:18.252655    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:18.352862    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:51:18.352902    2329 oci.go:668] temporary error verifying shutdown: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:18.352910    2329 oci.go:670] temporary error: container addons-20211117115052-2067 status is  but expect it to be exited
	I1117 11:51:18.352932    2329 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:24.135245    2329 cli_runner.go:115] Run: docker container inspect addons-20211117115052-2067 --format={{.State.Status}}
	W1117 11:51:24.238039    2329 cli_runner.go:162] docker container inspect addons-20211117115052-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:51:24.238078    2329 oci.go:668] temporary error verifying shutdown: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:24.238087    2329 oci.go:670] temporary error: container addons-20211117115052-2067 status is  but expect it to be exited
	I1117 11:51:24.238111    2329 oci.go:87] couldn't shut down addons-20211117115052-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "addons-20211117115052-2067": docker container inspect addons-20211117115052-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	 
	I1117 11:51:24.238220    2329 cli_runner.go:115] Run: docker rm -f -v addons-20211117115052-2067
	I1117 11:51:24.351044    2329 cli_runner.go:115] Run: docker container inspect -f {{.Id}} addons-20211117115052-2067
	W1117 11:51:24.447214    2329 cli_runner.go:162] docker container inspect -f {{.Id}} addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:24.447345    2329 cli_runner.go:115] Run: docker network inspect addons-20211117115052-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:51:24.545260    2329 cli_runner.go:115] Run: docker network rm addons-20211117115052-2067
	I1117 11:51:27.399530    2329 cli_runner.go:168] Completed: docker network rm addons-20211117115052-2067: (2.854238156s)
	W1117 11:51:27.399803    2329 delete.go:139] delete failed (probably ok) <nil>
	I1117 11:51:27.399810    2329 fix.go:120] Sleeping 1 second for extra luck!
	I1117 11:51:28.404367    2329 start.go:126] createHost starting for "" (driver="docker")
	I1117 11:51:28.431679    2329 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 11:51:28.431922    2329 start.go:160] libmachine.API.Create for "addons-20211117115052-2067" (driver="docker")
	I1117 11:51:28.431981    2329 client.go:168] LocalClient.Create starting
	I1117 11:51:28.432162    2329 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 11:51:28.432242    2329 main.go:130] libmachine: Decoding PEM data...
	I1117 11:51:28.432268    2329 main.go:130] libmachine: Parsing certificate...
	I1117 11:51:28.432385    2329 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 11:51:28.432438    2329 main.go:130] libmachine: Decoding PEM data...
	I1117 11:51:28.432463    2329 main.go:130] libmachine: Parsing certificate...
	I1117 11:51:28.433634    2329 cli_runner.go:115] Run: docker network inspect addons-20211117115052-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 11:51:28.534747    2329 cli_runner.go:162] docker network inspect addons-20211117115052-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 11:51:28.534897    2329 network_create.go:254] running [docker network inspect addons-20211117115052-2067] to gather additional debugging logs...
	I1117 11:51:28.534920    2329 cli_runner.go:115] Run: docker network inspect addons-20211117115052-2067
	W1117 11:51:28.631063    2329 cli_runner.go:162] docker network inspect addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:28.631087    2329 network_create.go:257] error running [docker network inspect addons-20211117115052-2067]: docker network inspect addons-20211117115052-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20211117115052-2067
	I1117 11:51:28.631099    2329 network_create.go:259] output of [docker network inspect addons-20211117115052-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20211117115052-2067
	
	** /stderr **
	I1117 11:51:28.631205    2329 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:51:28.727092    2329 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00012c178] amended:false}} dirty:map[] misses:0}
	I1117 11:51:28.727136    2329 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:51:28.727315    2329 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00012c178] amended:true}} dirty:map[192.168.49.0:0xc00012c178 192.168.58.0:0xc000b14440] misses:0}
	I1117 11:51:28.727330    2329 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:51:28.727343    2329 network_create.go:106] attempt to create docker network addons-20211117115052-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 11:51:28.727440    2329 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20211117115052-2067
	I1117 11:51:32.506214    2329 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20211117115052-2067: (3.778763314s)
	I1117 11:51:32.506245    2329 network_create.go:90] docker network addons-20211117115052-2067 192.168.58.0/24 created
	I1117 11:51:32.506256    2329 kic.go:106] calculated static IP "192.168.58.2" for the "addons-20211117115052-2067" container
	I1117 11:51:32.506360    2329 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 11:51:32.603114    2329 cli_runner.go:115] Run: docker volume create addons-20211117115052-2067 --label name.minikube.sigs.k8s.io=addons-20211117115052-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 11:51:32.718820    2329 oci.go:102] Successfully created a docker volume addons-20211117115052-2067
	I1117 11:51:32.718944    2329 cli_runner.go:115] Run: docker run --rm --name addons-20211117115052-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20211117115052-2067 --entrypoint /usr/bin/test -v addons-20211117115052-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 11:51:33.130674    2329 oci.go:106] Successfully prepared a docker volume addons-20211117115052-2067
	E1117 11:51:33.130721    2329 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 11:51:33.130730    2329 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 11:51:33.130731    2329 client.go:171] LocalClient.Create took 4.698777741s
	I1117 11:51:33.130748    2329 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 11:51:33.130858    2329 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117115052-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 11:51:35.131199    2329 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:51:35.131288    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:35.267273    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:35.267423    2329 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:35.453028    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:35.564611    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:35.564704    2329 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:35.903016    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:36.063280    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:36.063453    2329 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:36.531131    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:36.649975    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	W1117 11:51:36.650073    2329 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	
	W1117 11:51:36.650091    2329 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:36.650101    2329 start.go:129] duration metric: createHost completed in 8.245758965s
	I1117 11:51:36.650172    2329 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:51:36.650240    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:36.761794    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:36.761878    2329 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:36.966509    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:37.085545    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:37.085625    2329 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:37.383451    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:37.502220    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	I1117 11:51:37.502322    2329 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:38.174350    2329 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067
	W1117 11:51:38.282069    2329 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067 returned with exit code 1
	W1117 11:51:38.282163    2329 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	
	W1117 11:51:38.282184    2329 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117115052-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117115052-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117115052-2067
	I1117 11:51:38.282194    2329 fix.go:57] fixHost completed within 29.943416916s
	I1117 11:51:38.282202    2329 start.go:80] releasing machines lock for "addons-20211117115052-2067", held for 29.943489521s
	W1117 11:51:38.282337    2329 out.go:241] * Failed to start docker container. Running "minikube delete -p addons-20211117115052-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p addons-20211117115052-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 11:51:38.343054    2329 out.go:176] 
	W1117 11:51:38.343274    2329 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 11:51:38.343286    2329 out.go:241] * 
	* 
	W1117 11:51:38.344445    2329 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 11:51:38.422836    2329 out.go:176] 

                                                
                                                
** /stderr **
addons_test.go:78: out/minikube-darwin-amd64 start -p addons-20211117115052-2067 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 80
--- FAIL: TestAddons/Setup (45.87s)
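The stderr above repeatedly logs "will retry after ...: couldn't verify container is exited" with growing delays before giving up ("might be okay") and recreating the host. As a rough illustration of that polling pattern only, not minikube's actual retry.go, the following Go sketch shells out to the same `docker container inspect --format {{.State.Status}}` command and backs off between attempts; the container name is the profile from the log, and the delay schedule is invented.

// backoff_check.go: illustrative sketch of the retry-with-growing-delay pattern
// visible in the log above. Not minikube code; the delays are made up.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerStatus runs the same inspect command as the log and returns the
// trimmed .State.Status value.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("unknown state %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const name = "addons-20211117115052-2067" // profile/container name from the log
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= 7; attempt++ {
		status, err := containerStatus(name)
		if err == nil && status == "exited" {
			fmt.Println("container is exited")
			return
		}
		fmt.Printf("attempt %d: status=%q err=%v; retrying in %v\n", attempt, status, err, delay)
		time.Sleep(delay)
		delay *= 2 // grow the delay, roughly like the increasing intervals in the log
	}
	fmt.Println("couldn't verify container is exited (might be okay)")
}

Against a container that no longer exists, every attempt prints the same "No such container" error, which is exactly the loop recorded in the stderr above.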

                                                
                                    

TestCertOptions (53.84s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20211117122537-2067 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-20211117122537-2067 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: exit status 80 (47.965202327s)

                                                
                                                
-- stdout --
	* [cert-options-20211117122537-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node cert-options-20211117122537-2067 in cluster cert-options-20211117122537-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-options-20211117122537-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:25:43.906953   14546 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:26:19.623696   14546 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-options-20211117122537-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:52: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-20211117122537-2067 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost" : exit status 80
cert_options_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20211117122537-2067 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:61: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-20211117122537-2067 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 80 (300.234111ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20211117122537-2067": docker container inspect cert-options-20211117122537-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117122537-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_c1f8366d59c5f8f6460a712ebd6036fcc73bcb99_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:63: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-20211117122537-2067 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 80
cert_options_test.go:70: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:70: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:70: apiserver cert does not include localhost in SAN.
cert_options_test.go:70: apiserver cert does not include www.google.com in SAN.
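These four assertions correspond to the --apiserver-ips and --apiserver-names values passed to start at cert_options_test.go:50; they fail here only because the certificate could never be fetched from the nonexistent node. For reference, a minimal Go sketch of this kind of SAN check (the local file name is a hypothetical copy of /var/lib/minikube/certs/apiserver.crt, which the test normally reads over ssh with openssl):

// san_check.go: sketch of inspecting a certificate's SAN entries with Go's
// x509 parser, mirroring what cert_options_test.go:70 asserts.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the apiserver cert
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // should include localhost and www.google.com
	fmt.Println("IP SANs: ", cert.IPAddresses) // should include 127.0.0.1 and 192.168.15.15
}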
cert_options_test.go:83: failed to inspect container for the port get port 8555 for "cert-options-20211117122537-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20211117122537-2067: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: cert-options-20211117122537-2067
cert_options_test.go:86: expected to get a non-zero forwarded port but got 0
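The template quoted in this failure, {{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}, indexes into the container's published-port map to find the host port bound to 8555/tcp; the ssh-port lookups earlier in this report use the same trick for 22/tcp. A small Go sketch of that lookup, using the container name from this test (against a missing container it fails with the same "No such container" error):

// host_port.go: sketch of reading a published host port from
// `docker container inspect -f`, as the test above attempts.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPort(container, containerPort string) (string, error) {
	// .NetworkSettings.Ports maps "8555/tcp" to a list of host bindings;
	// index twice to reach the first binding's HostPort.
	format := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("cert-options-20211117122537-2067", "8555/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver is published on host port", port)
}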
cert_options_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20211117122537-2067 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-20211117122537-2067 -- "sudo cat /etc/kubernetes/admin.conf": exit status 80 (246.72918ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20211117122537-2067": docker container inspect cert-options-20211117122537-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117122537-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_e59a677a82728474bde049b1a4510f5e357f9593_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:103: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-20211117122537-2067 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 80
cert_options_test.go:107: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20211117122537-2067": docker container inspect cert-options-20211117122537-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117122537-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_e59a677a82728474bde049b1a4510f5e357f9593_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:110: *** TestCertOptions FAILED at 2021-11-17 12:26:25.706209 -0800 PST m=+2181.438929818
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-20211117122537-2067
helpers_test.go:235: (dbg) docker inspect cert-options-20211117122537-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "cert-options-20211117122537-2067",
	        "Id": "3c673089dc7da16305d2d3429702c38c7250c8f0ea378fa14e8003c1482970cf",
	        "Created": "2021-11-17T20:26:14.181565697Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-20211117122537-2067 -n cert-options-20211117122537-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-20211117122537-2067 -n cert-options-20211117122537-2067: exit status 7 (152.195529ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:26:26.005197   14790 status.go:247] status error: host: state: unknown state "cert-options-20211117122537-2067": docker container inspect cert-options-20211117122537-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117122537-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-20211117122537-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-options-20211117122537-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20211117122537-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20211117122537-2067: (4.922459599s)
--- FAIL: TestCertOptions (53.84s)

                                                
                                    
TestCertExpiration (317.49s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20211117122341-2067 --memory=2048 --cert-expiration=3m --driver=docker 

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-20211117122341-2067 --memory=2048 --cert-expiration=3m --driver=docker : exit status 80 (58.93028323s)

                                                
                                                
-- stdout --
	* [cert-expiration-20211117122341-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node cert-expiration-20211117122341-2067 in cluster cert-expiration-20211117122341-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20211117122341-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:23:51.728325   13772 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:24:34.685038   13772 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20211117122341-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:126: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-20211117122341-2067 --memory=2048 --cert-expiration=3m --driver=docker " : exit status 80

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20211117122341-2067 --memory=2048 --cert-expiration=8760h --driver=docker 

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-20211117122341-2067 --memory=2048 --cert-expiration=8760h --driver=docker : exit status 80 (1m9.535528375s)

                                                
                                                
-- stdout --
	* [cert-expiration-20211117122341-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20211117122341-2067 in cluster cert-expiration-20211117122341-2067
	* Pulling base image ...
	* docker "cert-expiration-20211117122341-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20211117122341-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:28:06.130417   15185 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:28:43.733902   15185 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20211117122341-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:134: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-20211117122341-2067 --memory=2048 --cert-expiration=8760h --driver=docker " : exit status 80
cert_options_test.go:137: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-20211117122341-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20211117122341-2067 in cluster cert-expiration-20211117122341-2067
	* Pulling base image ...
	* docker "cert-expiration-20211117122341-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20211117122341-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:28:06.130417   15185 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:28:43.733902   15185 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20211117122341-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:139: *** TestCertExpiration FAILED at 2021-11-17 12:28:49.555994 -0800 PST m=+2325.289610036
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertExpiration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-expiration-20211117122341-2067
helpers_test.go:235: (dbg) docker inspect cert-expiration-20211117122341-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "cert-expiration-20211117122341-2067",
	        "Id": "92fdede66ac97b040dd1ce03d8fe3e75ec994abe7a05c5899130fadf75ee5cc9",
	        "Created": "2021-11-17T20:28:35.546036044Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-20211117122341-2067 -n cert-expiration-20211117122341-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-20211117122341-2067 -n cert-expiration-20211117122341-2067: exit status 7 (175.297142ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:28:49.939926   15824 status.go:247] status error: host: state: unknown state "cert-expiration-20211117122341-2067": docker container inspect cert-expiration-20211117122341-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-expiration-20211117122341-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-20211117122341-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-expiration-20211117122341-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20211117122341-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20211117122341-2067: (8.626340525s)
--- FAIL: TestCertExpiration (317.49s)
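For background: the first start above uses --cert-expiration=3m so that the cluster certificates lapse before the second start, which is then expected to notice and regenerate them (hence the "did not warn about expired certs" assertion). Neither start got that far here because no node container was ever created. A minimal Go sketch of the kind of NotAfter check that distinguishes an expired certificate (the file name is a hypothetical local copy; on a running node the certs live under /var/lib/minikube/certs):

// cert_expiry.go: sketch of checking a certificate's validity window,
// the condition TestCertExpiration is built around.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("NotBefore:", cert.NotBefore)
	fmt.Println("NotAfter: ", cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired")
	} else {
		fmt.Println("certificate is still valid for", time.Until(cert.NotAfter).Round(time.Second))
	}
}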

                                                
                                    
TestDockerFlags (57.36s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20211117122439-2067 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-20211117122439-2067 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 80 (48.149632924s)

                                                
                                                
-- stdout --
	* [docker-flags-20211117122439-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node docker-flags-20211117122439-2067 in cluster docker-flags-20211117122439-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-20211117122439-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:24:39.789508   14259 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:24:39.789673   14259 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:24:39.789678   14259 out.go:310] Setting ErrFile to fd 2...
	I1117 12:24:39.789682   14259 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:24:39.789778   14259 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:24:39.790176   14259 out.go:304] Setting JSON to false
	I1117 12:24:39.821288   14259 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3254,"bootTime":1637177425,"procs":338,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:24:39.821392   14259 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:24:39.905142   14259 out.go:176] * [docker-flags-20211117122439-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:24:39.905289   14259 notify.go:174] Checking for updates...
	I1117 12:24:39.978137   14259 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:24:40.032135   14259 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:24:40.055865   14259 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:24:40.092018   14259 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:24:40.092548   14259 config.go:176] Loaded profile config "cert-expiration-20211117122341-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:24:40.092658   14259 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:24:40.092703   14259 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:24:40.216906   14259 docker.go:132] docker version: linux-20.10.5
	I1117 12:24:40.217117   14259 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:24:40.573652   14259 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:50 SystemTime:2021-11-17 20:24:40.378659297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:24:40.629090   14259 out.go:176] * Using the docker driver based on user configuration
	I1117 12:24:40.629134   14259 start.go:280] selected driver: docker
	I1117 12:24:40.629144   14259 start.go:775] validating driver "docker" against <nil>
	I1117 12:24:40.629162   14259 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:24:40.631840   14259 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:24:40.790198   14259 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:50 SystemTime:2021-11-17 20:24:40.749499293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:24:40.790290   14259 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:24:40.790410   14259 start_flags.go:753] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1117 12:24:40.790426   14259 cni.go:93] Creating CNI manager for ""
	I1117 12:24:40.790433   14259 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:24:40.790441   14259 start_flags.go:282] config:
	{Name:docker-flags-20211117122439-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:docker-flags-20211117122439-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:24:40.849133   14259 out.go:176] * Starting control plane node docker-flags-20211117122439-2067 in cluster docker-flags-20211117122439-2067
	I1117 12:24:40.849213   14259 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:24:40.881663   14259 out.go:176] * Pulling base image ...
	I1117 12:24:40.881712   14259 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:24:40.881765   14259 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:24:40.881782   14259 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:24:40.881786   14259 cache.go:57] Caching tarball of preloaded images
	I1117 12:24:40.881939   14259 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:24:40.881961   14259 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:24:40.882679   14259 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/docker-flags-20211117122439-2067/config.json ...
	I1117 12:24:40.882777   14259 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/docker-flags-20211117122439-2067/config.json: {Name:mk5d068bdb19227dbcf3bb7e5ac43d59b816512d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:24:41.005456   14259 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:24:41.005475   14259 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:24:41.005507   14259 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:24:41.005564   14259 start.go:313] acquiring machines lock for docker-flags-20211117122439-2067: {Name:mk2c0ff1c5e8a773556b6f495aa727cf8fa77a96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:24:41.005721   14259 start.go:317] acquired machines lock for "docker-flags-20211117122439-2067" in 138.406µs
	I1117 12:24:41.005749   14259 start.go:89] Provisioning new machine with config: &{Name:docker-flags-20211117122439-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:docker-flags-20211117122439-2067 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:24:41.005824   14259 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:24:41.032682   14259 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:24:41.033066   14259 start.go:160] libmachine.API.Create for "docker-flags-20211117122439-2067" (driver="docker")
	I1117 12:24:41.033111   14259 client.go:168] LocalClient.Create starting
	I1117 12:24:41.033299   14259 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:24:41.033373   14259 main.go:130] libmachine: Decoding PEM data...
	I1117 12:24:41.033404   14259 main.go:130] libmachine: Parsing certificate...
	I1117 12:24:41.033525   14259 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:24:41.033582   14259 main.go:130] libmachine: Decoding PEM data...
	I1117 12:24:41.033598   14259 main.go:130] libmachine: Parsing certificate...
	I1117 12:24:41.034871   14259 cli_runner.go:115] Run: docker network inspect docker-flags-20211117122439-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:24:41.139227   14259 cli_runner.go:162] docker network inspect docker-flags-20211117122439-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:24:41.139337   14259 network_create.go:254] running [docker network inspect docker-flags-20211117122439-2067] to gather additional debugging logs...
	I1117 12:24:41.139355   14259 cli_runner.go:115] Run: docker network inspect docker-flags-20211117122439-2067
	W1117 12:24:41.240864   14259 cli_runner.go:162] docker network inspect docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:24:41.240898   14259 network_create.go:257] error running [docker network inspect docker-flags-20211117122439-2067]: docker network inspect docker-flags-20211117122439-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20211117122439-2067
	I1117 12:24:41.240914   14259 network_create.go:259] output of [docker network inspect docker-flags-20211117122439-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20211117122439-2067
	
	** /stderr **
	I1117 12:24:41.241021   14259 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:24:41.344454   14259 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000e260] misses:0}
	I1117 12:24:41.344492   14259 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:24:41.344509   14259 network_create.go:106] attempt to create docker network docker-flags-20211117122439-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:24:41.344584   14259 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117122439-2067
	I1117 12:24:46.192271   14259 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117122439-2067: (4.84767071s)
	I1117 12:24:46.192301   14259 network_create.go:90] docker network docker-flags-20211117122439-2067 192.168.49.0/24 created
	I1117 12:24:46.192322   14259 kic.go:106] calculated static IP "192.168.49.2" for the "docker-flags-20211117122439-2067" container
	I1117 12:24:46.192439   14259 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:24:46.297688   14259 cli_runner.go:115] Run: docker volume create docker-flags-20211117122439-2067 --label name.minikube.sigs.k8s.io=docker-flags-20211117122439-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:24:46.407172   14259 oci.go:102] Successfully created a docker volume docker-flags-20211117122439-2067
	I1117 12:24:46.407287   14259 cli_runner.go:115] Run: docker run --rm --name docker-flags-20211117122439-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20211117122439-2067 --entrypoint /usr/bin/test -v docker-flags-20211117122439-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:24:46.892413   14259 oci.go:106] Successfully prepared a docker volume docker-flags-20211117122439-2067
	I1117 12:24:46.892480   14259 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	E1117 12:24:46.892476   14259 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:24:46.892509   14259 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:24:46.892508   14259 client.go:171] LocalClient.Create took 5.859425599s
	I1117 12:24:46.892635   14259 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117122439-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:24:48.892850   14259 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:24:48.892948   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:24:49.026052   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:24:49.026217   14259 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:49.304525   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:24:49.430935   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:24:49.431010   14259 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:49.979015   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:24:50.103737   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:24:50.103813   14259 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:50.763295   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:24:50.873392   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	W1117 12:24:50.873487   14259 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	
	W1117 12:24:50.873519   14259 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:50.873536   14259 start.go:129] duration metric: createHost completed in 9.867769437s
	I1117 12:24:50.873550   14259 start.go:80] releasing machines lock for "docker-flags-20211117122439-2067", held for 9.867877238s
	W1117 12:24:50.873566   14259 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:24:50.874027   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:24:50.996654   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:24:50.996735   14259 delete.go:82] Unable to get host status for docker-flags-20211117122439-2067, assuming it has already been deleted: state: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	W1117 12:24:50.996955   14259 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:24:50.996986   14259 start.go:547] Will try again in 5 seconds ...
	I1117 12:24:52.737751   14259 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117122439-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.845126475s)
	I1117 12:24:52.737766   14259 kic.go:188] duration metric: took 5.845294 seconds to extract preloaded images to volume
	I1117 12:24:56.004025   14259 start.go:313] acquiring machines lock for docker-flags-20211117122439-2067: {Name:mk2c0ff1c5e8a773556b6f495aa727cf8fa77a96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:24:56.004216   14259 start.go:317] acquired machines lock for "docker-flags-20211117122439-2067" in 159.93µs
	I1117 12:24:56.004278   14259 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:24:56.004293   14259 fix.go:55] fixHost starting: 
	I1117 12:24:56.004762   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:24:56.107408   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:24:56.107454   14259 fix.go:108] recreateIfNeeded on docker-flags-20211117122439-2067: state= err=unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:56.107473   14259 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:24:56.135587   14259 out.go:176] * docker "docker-flags-20211117122439-2067" container is missing, will recreate.
	I1117 12:24:56.135640   14259 delete.go:124] DEMOLISHING docker-flags-20211117122439-2067 ...
	I1117 12:24:56.135890   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:24:56.237632   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:24:56.237675   14259 stop.go:75] unable to get state: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:56.237690   14259 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:56.238094   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:24:56.342403   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:24:56.342447   14259 delete.go:82] Unable to get host status for docker-flags-20211117122439-2067, assuming it has already been deleted: state: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:56.342539   14259 cli_runner.go:115] Run: docker container inspect -f {{.Id}} docker-flags-20211117122439-2067
	W1117 12:24:56.445560   14259 cli_runner.go:162] docker container inspect -f {{.Id}} docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:24:56.445586   14259 kic.go:360] could not find the container docker-flags-20211117122439-2067 to remove it. will try anyways
	I1117 12:24:56.445662   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:24:56.549885   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:24:56.549932   14259 oci.go:83] error getting container status, will try to delete anyways: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:56.550011   14259 cli_runner.go:115] Run: docker exec --privileged -t docker-flags-20211117122439-2067 /bin/bash -c "sudo init 0"
	W1117 12:24:56.653444   14259 cli_runner.go:162] docker exec --privileged -t docker-flags-20211117122439-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:24:56.653469   14259 oci.go:656] error shutdown docker-flags-20211117122439-2067: docker exec --privileged -t docker-flags-20211117122439-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:57.654138   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:24:57.760854   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:24:57.760901   14259 oci.go:668] temporary error verifying shutdown: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:57.760921   14259 oci.go:670] temporary error: container docker-flags-20211117122439-2067 status is  but expect it to be exited
	I1117 12:24:57.760947   14259 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:58.229066   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:24:58.333175   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:24:58.333215   14259 oci.go:668] temporary error verifying shutdown: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:58.333225   14259 oci.go:670] temporary error: container docker-flags-20211117122439-2067 status is  but expect it to be exited
	I1117 12:24:58.333247   14259 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:59.229109   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:24:59.350403   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:24:59.350445   14259 oci.go:668] temporary error verifying shutdown: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:59.350454   14259 oci.go:670] temporary error: container docker-flags-20211117122439-2067 status is  but expect it to be exited
	I1117 12:24:59.350476   14259 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:24:59.988208   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:25:00.092843   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:25:00.092880   14259 oci.go:668] temporary error verifying shutdown: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:00.092889   14259 oci.go:670] temporary error: container docker-flags-20211117122439-2067 status is  but expect it to be exited
	I1117 12:25:00.092921   14259 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:01.204026   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:25:01.306790   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:25:01.306831   14259 oci.go:668] temporary error verifying shutdown: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:01.306841   14259 oci.go:670] temporary error: container docker-flags-20211117122439-2067 status is  but expect it to be exited
	I1117 12:25:01.306862   14259 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:02.823614   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:25:02.926298   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:25:02.926344   14259 oci.go:668] temporary error verifying shutdown: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:02.926354   14259 oci.go:670] temporary error: container docker-flags-20211117122439-2067 status is  but expect it to be exited
	I1117 12:25:02.926379   14259 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:05.970280   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:25:06.075651   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:25:06.075692   14259 oci.go:668] temporary error verifying shutdown: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:06.075703   14259 oci.go:670] temporary error: container docker-flags-20211117122439-2067 status is  but expect it to be exited
	I1117 12:25:06.075725   14259 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:11.860516   14259 cli_runner.go:115] Run: docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}
	W1117 12:25:11.964712   14259 cli_runner.go:162] docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:25:11.964753   14259 oci.go:668] temporary error verifying shutdown: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:11.964773   14259 oci.go:670] temporary error: container docker-flags-20211117122439-2067 status is  but expect it to be exited
	I1117 12:25:11.964800   14259 oci.go:87] couldn't shut down docker-flags-20211117122439-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	 
	I1117 12:25:11.964884   14259 cli_runner.go:115] Run: docker rm -f -v docker-flags-20211117122439-2067
	I1117 12:25:12.065864   14259 cli_runner.go:115] Run: docker container inspect -f {{.Id}} docker-flags-20211117122439-2067
	W1117 12:25:12.167991   14259 cli_runner.go:162] docker container inspect -f {{.Id}} docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:25:12.168108   14259 cli_runner.go:115] Run: docker network inspect docker-flags-20211117122439-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:25:12.269321   14259 cli_runner.go:115] Run: docker network rm docker-flags-20211117122439-2067
	I1117 12:25:15.651783   14259 cli_runner.go:168] Completed: docker network rm docker-flags-20211117122439-2067: (3.382432042s)
	W1117 12:25:15.652058   14259 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:25:15.652065   14259 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:25:16.662201   14259 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:25:16.689579   14259 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:25:16.689779   14259 start.go:160] libmachine.API.Create for "docker-flags-20211117122439-2067" (driver="docker")
	I1117 12:25:16.689833   14259 client.go:168] LocalClient.Create starting
	I1117 12:25:16.690076   14259 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:25:16.711271   14259 main.go:130] libmachine: Decoding PEM data...
	I1117 12:25:16.711327   14259 main.go:130] libmachine: Parsing certificate...
	I1117 12:25:16.711484   14259 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:25:16.711579   14259 main.go:130] libmachine: Decoding PEM data...
	I1117 12:25:16.711599   14259 main.go:130] libmachine: Parsing certificate...
	I1117 12:25:16.712475   14259 cli_runner.go:115] Run: docker network inspect docker-flags-20211117122439-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:25:16.814410   14259 cli_runner.go:162] docker network inspect docker-flags-20211117122439-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:25:16.814503   14259 network_create.go:254] running [docker network inspect docker-flags-20211117122439-2067] to gather additional debugging logs...
	I1117 12:25:16.814521   14259 cli_runner.go:115] Run: docker network inspect docker-flags-20211117122439-2067
	W1117 12:25:16.915175   14259 cli_runner.go:162] docker network inspect docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:25:16.915198   14259 network_create.go:257] error running [docker network inspect docker-flags-20211117122439-2067]: docker network inspect docker-flags-20211117122439-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20211117122439-2067
	I1117 12:25:16.915211   14259 network_create.go:259] output of [docker network inspect docker-flags-20211117122439-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20211117122439-2067
	
	** /stderr **
	I1117 12:25:16.915291   14259 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:25:17.015423   14259 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e260] amended:false}} dirty:map[] misses:0}
	I1117 12:25:17.015457   14259 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:25:17.015632   14259 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e260] amended:true}} dirty:map[192.168.49.0:0xc00000e260 192.168.58.0:0xc000186278] misses:0}
	I1117 12:25:17.015645   14259 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:25:17.015652   14259 network_create.go:106] attempt to create docker network docker-flags-20211117122439-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:25:17.015727   14259 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117122439-2067
	I1117 12:25:22.014466   14259 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117122439-2067: (4.998686147s)
	I1117 12:25:22.014496   14259 network_create.go:90] docker network docker-flags-20211117122439-2067 192.168.58.0/24 created
	I1117 12:25:22.014521   14259 kic.go:106] calculated static IP "192.168.58.2" for the "docker-flags-20211117122439-2067" container
	I1117 12:25:22.014641   14259 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:25:22.115360   14259 cli_runner.go:115] Run: docker volume create docker-flags-20211117122439-2067 --label name.minikube.sigs.k8s.io=docker-flags-20211117122439-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:25:22.215828   14259 oci.go:102] Successfully created a docker volume docker-flags-20211117122439-2067
	I1117 12:25:22.215983   14259 cli_runner.go:115] Run: docker run --rm --name docker-flags-20211117122439-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20211117122439-2067 --entrypoint /usr/bin/test -v docker-flags-20211117122439-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:25:22.620802   14259 oci.go:106] Successfully prepared a docker volume docker-flags-20211117122439-2067
	E1117 12:25:22.620862   14259 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:25:22.620872   14259 client.go:171] LocalClient.Create took 5.931070942s
	I1117 12:25:22.620895   14259 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:25:22.620911   14259 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:25:22.621013   14259 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117122439-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:25:24.621251   14259 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:25:24.621338   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:25:24.741245   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:25:24.763081   14259 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:24.942325   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:25:25.067753   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:25:25.067865   14259 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:25.398607   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:25:25.520330   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:25:25.520419   14259 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:25.981108   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:25:26.110349   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	W1117 12:25:26.110435   14259 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	
	W1117 12:25:26.110460   14259 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:26.110470   14259 start.go:129] duration metric: createHost completed in 9.448250462s
	I1117 12:25:26.110526   14259 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:25:26.110591   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:25:26.227062   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:25:26.227136   14259 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:26.423929   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:25:26.541688   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:25:26.541818   14259 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:26.841202   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:25:26.963016   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	I1117 12:25:26.963150   14259 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:27.626976   14259 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067
	W1117 12:25:27.747229   14259 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067 returned with exit code 1
	W1117 12:25:27.747362   14259 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	
	W1117 12:25:27.747407   14259 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117122439-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117122439-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	I1117 12:25:27.747428   14259 fix.go:57] fixHost completed within 31.743332672s
	I1117 12:25:27.747442   14259 start.go:80] releasing machines lock for "docker-flags-20211117122439-2067", held for 31.743409422s
	W1117 12:25:27.747629   14259 out.go:241] * Failed to start docker container. Running "minikube delete -p docker-flags-20211117122439-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p docker-flags-20211117122439-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:25:27.797849   14259 out.go:176] 
	W1117 12:25:27.797979   14259 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:25:27.797991   14259 out.go:241] * 
	* 
	W1117 12:25:27.798580   14259 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:25:27.875920   14259 out.go:176] 

** /stderr **
docker_test.go:48: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-20211117122439-2067 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 80
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20211117122439-2067 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-20211117122439-2067 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (336.651138ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_45ab9b4ee43b1ccee1cc1cad42a504b375b49bd8_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:53: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-20211117122439-2067 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:58: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:58: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20211117122439-2067 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:62: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-20211117122439-2067 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (212.237044ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_0c4d48d3465e4cc08ca5bd2bd06b407509a1612b_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:64: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-20211117122439-2067 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:68: expected "out/minikube-darwin-amd64 -p docker-flags-20211117122439-2067 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:642: *** TestDockerFlags FAILED at 2021-11-17 12:25:28.436223 -0800 PST m=+2124.168587056
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-20211117122439-2067
helpers_test.go:235: (dbg) docker inspect docker-flags-20211117122439-2067:

-- stdout --
	[
	    {
	        "Name": "docker-flags-20211117122439-2067",
	        "Id": "fc2d9e3772fb52079862a8e400380240755665abb5d483bd0c19a7c3874d00af",
	        "Created": "2021-11-17T20:25:17.11110049Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-20211117122439-2067 -n docker-flags-20211117122439-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-20211117122439-2067 -n docker-flags-20211117122439-2067: exit status 7 (147.436347ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:25:28.687458   14496 status.go:247] status error: host: state: unknown state "docker-flags-20211117122439-2067": docker container inspect docker-flags-20211117122439-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117122439-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-20211117122439-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-20211117122439-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20211117122439-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20211117122439-2067: (8.402493836s)
--- FAIL: TestDockerFlags (57.36s)
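For reference, a minimal manual re-run of the checks that docker_test.go performs above (the two assertions only saw empty output because the kic node never came up, failing earlier with "Unable to locate kernel modules"). This is a hedged sketch, not part of the CI output: the profile name "docker-flags" is a hypothetical stand-in for the generated test profile, and it assumes a host where "minikube start --driver=docker" actually succeeds.

    # Start a cluster with the same env/opt flags the test passes through:
    minikube start -p docker-flags --memory=2048 --driver=docker \
      --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true
    # docker_test.go:58 expects FOO=BAR and BAZ=BAT in the Docker daemon's environment:
    minikube -p docker-flags ssh "sudo systemctl show docker --property=Environment --no-pager"
    # docker_test.go:68 expects --debug (and --icc=true) in dockerd's ExecStart line:
    minikube -p docker-flags ssh "sudo systemctl show docker --property=ExecStart --no-pager"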

TestForceSystemdFlag (62.7s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag


=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20211117122227-2067 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-20211117122227-2067 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 80 (55.920589708s)

-- stdout --
	* [force-systemd-flag-20211117122227-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node force-systemd-flag-20211117122227-2067 in cluster force-systemd-flag-20211117122227-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-20211117122227-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:22:27.844138   13172 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:22:27.844275   13172 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:22:27.844280   13172 out.go:310] Setting ErrFile to fd 2...
	I1117 12:22:27.844283   13172 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:22:27.844380   13172 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:22:27.844697   13172 out.go:304] Setting JSON to false
	I1117 12:22:27.869185   13172 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3122,"bootTime":1637177425,"procs":323,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:22:27.869282   13172 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:22:27.896755   13172 out.go:176] * [force-systemd-flag-20211117122227-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:22:27.896929   13172 notify.go:174] Checking for updates...
	I1117 12:22:27.944136   13172 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:22:27.970588   13172 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:22:27.996314   13172 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:22:28.022103   13172 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:22:28.022526   13172 config.go:176] Loaded profile config "NoKubernetes-20211117122048-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1117 12:22:28.022610   13172 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:22:28.022657   13172 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:22:28.113565   13172 docker.go:132] docker version: linux-20.10.5
	I1117 12:22:28.113733   13172 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:22:28.265918   13172 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:49 SystemTime:2021-11-17 20:22:28.219394131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:22:28.314470   13172 out.go:176] * Using the docker driver based on user configuration
	I1117 12:22:28.314538   13172 start.go:280] selected driver: docker
	I1117 12:22:28.314551   13172 start.go:775] validating driver "docker" against <nil>
	I1117 12:22:28.314580   13172 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:22:28.317894   13172 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:22:28.470018   13172 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:49 SystemTime:2021-11-17 20:22:28.423049798 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:22:28.470100   13172 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:22:28.470224   13172 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 12:22:28.470240   13172 cni.go:93] Creating CNI manager for ""
	I1117 12:22:28.470246   13172 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:22:28.470257   13172 start_flags.go:282] config:
	{Name:force-systemd-flag-20211117122227-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-flag-20211117122227-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:22:28.517687   13172 out.go:176] * Starting control plane node force-systemd-flag-20211117122227-2067 in cluster force-systemd-flag-20211117122227-2067
	I1117 12:22:28.517753   13172 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:22:28.543776   13172 out.go:176] * Pulling base image ...
	I1117 12:22:28.543857   13172 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:22:28.543934   13172 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:22:28.543945   13172 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:22:28.543970   13172 cache.go:57] Caching tarball of preloaded images
	I1117 12:22:28.544283   13172 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:22:28.544308   13172 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:22:28.545310   13172 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/force-systemd-flag-20211117122227-2067/config.json ...
	I1117 12:22:28.545471   13172 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/force-systemd-flag-20211117122227-2067/config.json: {Name:mk0591a7cd3eb5431f92c015a691880367ff887b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:22:28.658539   13172 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:22:28.658554   13172 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:22:28.658566   13172 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:22:28.658601   13172 start.go:313] acquiring machines lock for force-systemd-flag-20211117122227-2067: {Name:mkd200a1a1453670c86a38a6ef7c86050250f428 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:22:28.658730   13172 start.go:317] acquired machines lock for "force-systemd-flag-20211117122227-2067" in 117.47µs
	I1117 12:22:28.658756   13172 start.go:89] Provisioning new machine with config: &{Name:force-systemd-flag-20211117122227-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-flag-20211117122227-2067 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:22:28.658836   13172 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:22:28.706311   13172 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:22:28.706628   13172 start.go:160] libmachine.API.Create for "force-systemd-flag-20211117122227-2067" (driver="docker")
	I1117 12:22:28.706672   13172 client.go:168] LocalClient.Create starting
	I1117 12:22:28.706812   13172 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:22:28.706892   13172 main.go:130] libmachine: Decoding PEM data...
	I1117 12:22:28.706923   13172 main.go:130] libmachine: Parsing certificate...
	I1117 12:22:28.707037   13172 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:22:28.707088   13172 main.go:130] libmachine: Decoding PEM data...
	I1117 12:22:28.707107   13172 main.go:130] libmachine: Parsing certificate...
	I1117 12:22:28.707912   13172 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117122227-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:22:28.810749   13172 cli_runner.go:162] docker network inspect force-systemd-flag-20211117122227-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:22:28.810854   13172 network_create.go:254] running [docker network inspect force-systemd-flag-20211117122227-2067] to gather additional debugging logs...
	I1117 12:22:28.810874   13172 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117122227-2067
	W1117 12:22:28.913252   13172 cli_runner.go:162] docker network inspect force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:22:28.913277   13172 network_create.go:257] error running [docker network inspect force-systemd-flag-20211117122227-2067]: docker network inspect force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20211117122227-2067
	I1117 12:22:28.913292   13172 network_create.go:259] output of [docker network inspect force-systemd-flag-20211117122227-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20211117122227-2067
	
	** /stderr **
	I1117 12:22:28.913389   13172 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:22:29.015194   13172 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000384110] misses:0}
	I1117 12:22:29.015229   13172 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:22:29.015250   13172 network_create.go:106] attempt to create docker network force-systemd-flag-20211117122227-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:22:29.015333   13172 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117122227-2067
	W1117 12:22:29.117206   13172 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117122227-2067 returned with exit code 1
	W1117 12:22:29.117245   13172 network_create.go:98] failed to create docker network force-systemd-flag-20211117122227-2067 192.168.49.0/24, will retry: subnet is taken
	I1117 12:22:29.117486   13172 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000384110] amended:false}} dirty:map[] misses:0}
	I1117 12:22:29.117504   13172 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:22:29.117684   13172 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000384110] amended:true}} dirty:map[192.168.49.0:0xc000384110 192.168.58.0:0xc00027e478] misses:0}
	I1117 12:22:29.117696   13172 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:22:29.117702   13172 network_create.go:106] attempt to create docker network force-systemd-flag-20211117122227-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:22:29.117778   13172 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117122227-2067
	I1117 12:22:36.555646   13172 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117122227-2067: (7.437873924s)
	I1117 12:22:36.555667   13172 network_create.go:90] docker network force-systemd-flag-20211117122227-2067 192.168.58.0/24 created
	I1117 12:22:36.555686   13172 kic.go:106] calculated static IP "192.168.58.2" for the "force-systemd-flag-20211117122227-2067" container
	I1117 12:22:36.555799   13172 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:22:36.655911   13172 cli_runner.go:115] Run: docker volume create force-systemd-flag-20211117122227-2067 --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117122227-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:22:36.758122   13172 oci.go:102] Successfully created a docker volume force-systemd-flag-20211117122227-2067
	I1117 12:22:36.758271   13172 cli_runner.go:115] Run: docker run --rm --name force-systemd-flag-20211117122227-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117122227-2067 --entrypoint /usr/bin/test -v force-systemd-flag-20211117122227-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:22:37.235048   13172 oci.go:106] Successfully prepared a docker volume force-systemd-flag-20211117122227-2067
	I1117 12:22:37.235098   13172 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	E1117 12:22:37.235101   13172 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:22:37.235120   13172 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:22:37.235128   13172 client.go:171] LocalClient.Create took 8.528500634s
	I1117 12:22:37.235257   13172 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117122227-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:22:39.235764   13172 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:22:39.235871   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:22:39.359802   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:22:39.359931   13172 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:39.636359   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:22:39.757762   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:22:39.757839   13172 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:40.305588   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:22:40.425165   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:22:40.425246   13172 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:41.080509   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:22:41.199741   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	W1117 12:22:41.199827   13172 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	
	W1117 12:22:41.199856   13172 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:41.199868   13172 start.go:129] duration metric: createHost completed in 12.541104825s
	I1117 12:22:41.199874   13172 start.go:80] releasing machines lock for "force-systemd-flag-20211117122227-2067", held for 12.541215282s
	W1117 12:22:41.199892   13172 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:22:41.200457   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:41.328469   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:41.328525   13172 delete.go:82] Unable to get host status for force-systemd-flag-20211117122227-2067, assuming it has already been deleted: state: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	W1117 12:22:41.328676   13172 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:22:41.328694   13172 start.go:547] Will try again in 5 seconds ...
	I1117 12:22:43.466605   13172 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117122227-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.231339845s)
	I1117 12:22:43.466641   13172 kic.go:188] duration metric: took 6.231547 seconds to extract preloaded images to volume
	I1117 12:22:46.337055   13172 start.go:313] acquiring machines lock for force-systemd-flag-20211117122227-2067: {Name:mkd200a1a1453670c86a38a6ef7c86050250f428 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:22:46.337221   13172 start.go:317] acquired machines lock for "force-systemd-flag-20211117122227-2067" in 135.365µs
	I1117 12:22:46.337262   13172 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:22:46.337290   13172 fix.go:55] fixHost starting: 
	I1117 12:22:46.337745   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:46.442140   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:46.442191   13172 fix.go:108] recreateIfNeeded on force-systemd-flag-20211117122227-2067: state= err=unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:46.442205   13172 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:22:46.490809   13172 out.go:176] * docker "force-systemd-flag-20211117122227-2067" container is missing, will recreate.
	I1117 12:22:46.490842   13172 delete.go:124] DEMOLISHING force-systemd-flag-20211117122227-2067 ...
	I1117 12:22:46.491036   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:46.594538   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:22:46.594580   13172 stop.go:75] unable to get state: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:46.594592   13172 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:46.595011   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:46.698224   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:46.698266   13172 delete.go:82] Unable to get host status for force-systemd-flag-20211117122227-2067, assuming it has already been deleted: state: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:46.698347   13172 cli_runner.go:115] Run: docker container inspect -f {{.Id}} force-systemd-flag-20211117122227-2067
	W1117 12:22:46.800907   13172 cli_runner.go:162] docker container inspect -f {{.Id}} force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:22:46.800935   13172 kic.go:360] could not find the container force-systemd-flag-20211117122227-2067 to remove it. will try anyways
	I1117 12:22:46.801018   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:46.902812   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:22:46.902853   13172 oci.go:83] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:46.902943   13172 cli_runner.go:115] Run: docker exec --privileged -t force-systemd-flag-20211117122227-2067 /bin/bash -c "sudo init 0"
	W1117 12:22:47.004372   13172 cli_runner.go:162] docker exec --privileged -t force-systemd-flag-20211117122227-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:22:47.004404   13172 oci.go:656] error shutdown force-systemd-flag-20211117122227-2067: docker exec --privileged -t force-systemd-flag-20211117122227-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:48.014714   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:48.119430   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:48.119477   13172 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:48.119486   13172 oci.go:670] temporary error: container force-systemd-flag-20211117122227-2067 status is  but expect it to be exited
	I1117 12:22:48.119518   13172 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:48.588479   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:48.692826   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:48.692866   13172 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:48.692871   13172 oci.go:670] temporary error: container force-systemd-flag-20211117122227-2067 status is  but expect it to be exited
	I1117 12:22:48.692896   13172 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:49.588457   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:49.693181   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:49.693241   13172 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:49.693250   13172 oci.go:670] temporary error: container force-systemd-flag-20211117122227-2067 status is  but expect it to be exited
	I1117 12:22:49.693283   13172 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:50.338441   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:50.450811   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:50.450846   13172 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:50.450853   13172 oci.go:670] temporary error: container force-systemd-flag-20211117122227-2067 status is  but expect it to be exited
	I1117 12:22:50.450879   13172 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:51.562604   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:51.665531   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:51.665572   13172 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:51.665579   13172 oci.go:670] temporary error: container force-systemd-flag-20211117122227-2067 status is  but expect it to be exited
	I1117 12:22:51.665599   13172 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:53.180794   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:53.284185   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:53.284234   13172 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:53.284242   13172 oci.go:670] temporary error: container force-systemd-flag-20211117122227-2067 status is  but expect it to be exited
	I1117 12:22:53.284267   13172 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:56.334438   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:22:56.436584   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:56.436622   13172 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:22:56.436628   13172 oci.go:670] temporary error: container force-systemd-flag-20211117122227-2067 status is  but expect it to be exited
	I1117 12:22:56.436653   13172 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:23:02.218919   13172 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}
	W1117 12:23:02.320382   13172 cli_runner.go:162] docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:23:02.320422   13172 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:23:02.320430   13172 oci.go:670] temporary error: container force-systemd-flag-20211117122227-2067 status is  but expect it to be exited
	I1117 12:23:02.320457   13172 oci.go:87] couldn't shut down force-systemd-flag-20211117122227-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	 
	I1117 12:23:02.320549   13172 cli_runner.go:115] Run: docker rm -f -v force-systemd-flag-20211117122227-2067
	I1117 12:23:02.421512   13172 cli_runner.go:115] Run: docker container inspect -f {{.Id}} force-systemd-flag-20211117122227-2067
	W1117 12:23:02.523543   13172 cli_runner.go:162] docker container inspect -f {{.Id}} force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:23:02.523660   13172 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117122227-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:23:02.625660   13172 cli_runner.go:115] Run: docker network rm force-systemd-flag-20211117122227-2067
	I1117 12:23:07.638553   13172 cli_runner.go:168] Completed: docker network rm force-systemd-flag-20211117122227-2067: (5.012877512s)
	W1117 12:23:07.638832   13172 delete.go:139] delete failed (probably ok) <nil>
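
The repeated inspect calls above follow a simple grow-the-delay retry: every failure to read {{.State.Status}} schedules another attempt after a longer pause, and once the budget is spent the caller writes it off as "might be okay" and moves on to docker rm / docker network rm. A minimal stand-alone sketch of that pattern, shelling out the same way cli_runner.go does; the function names, the five-attempt budget and the doubling delay are illustrative, not minikube's actual retry.go API.

-- sketch (Go) --
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerStatus runs the same inspect command logged above.
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitForExit polls until the container reports "exited", growing the delay
	// between attempts. Inspect errors ("No such container") are retried too,
	// which is exactly what produces the "temporary error verifying shutdown"
	// lines in this log.
	func waitForExit(name string, attempts int) error {
		delay := time.Second
		for i := 0; i < attempts; i++ {
			if status, err := containerStatus(name); err == nil && status == "exited" {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("couldn't verify %s exited (might be okay)", name)
	}

	func main() {
		if err := waitForExit("force-systemd-flag-20211117122227-2067", 5); err != nil {
			fmt.Println(err)
		}
	}
-- /sketch --
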
	I1117 12:23:07.638839   13172 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:23:08.639688   13172 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:23:08.687318   13172 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:23:08.687417   13172 start.go:160] libmachine.API.Create for "force-systemd-flag-20211117122227-2067" (driver="docker")
	I1117 12:23:08.687459   13172 client.go:168] LocalClient.Create starting
	I1117 12:23:08.688155   13172 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:23:08.688222   13172 main.go:130] libmachine: Decoding PEM data...
	I1117 12:23:08.688241   13172 main.go:130] libmachine: Parsing certificate...
	I1117 12:23:08.688329   13172 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:23:08.688373   13172 main.go:130] libmachine: Decoding PEM data...
	I1117 12:23:08.688396   13172 main.go:130] libmachine: Parsing certificate...
	I1117 12:23:08.689037   13172 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117122227-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:23:08.789132   13172 cli_runner.go:162] docker network inspect force-systemd-flag-20211117122227-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:23:08.789256   13172 network_create.go:254] running [docker network inspect force-systemd-flag-20211117122227-2067] to gather additional debugging logs...
	I1117 12:23:08.789276   13172 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117122227-2067
	W1117 12:23:08.889839   13172 cli_runner.go:162] docker network inspect force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:23:08.889872   13172 network_create.go:257] error running [docker network inspect force-systemd-flag-20211117122227-2067]: docker network inspect force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20211117122227-2067
	I1117 12:23:08.889896   13172 network_create.go:259] output of [docker network inspect force-systemd-flag-20211117122227-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20211117122227-2067
	
	** /stderr **
	I1117 12:23:08.890005   13172 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:23:08.993319   13172 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000384110] amended:true}} dirty:map[192.168.49.0:0xc000384110 192.168.58.0:0xc00027e478] misses:0}
	I1117 12:23:08.993354   13172 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:23:08.993532   13172 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000384110] amended:true}} dirty:map[192.168.49.0:0xc000384110 192.168.58.0:0xc00027e478] misses:1}
	I1117 12:23:08.993541   13172 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:23:08.993707   13172 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000384110] amended:true}} dirty:map[192.168.49.0:0xc000384110 192.168.58.0:0xc00027e478 192.168.67.0:0xc00000e180] misses:1}
	I1117 12:23:08.993718   13172 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:23:08.993724   13172 network_create.go:106] attempt to create docker network force-systemd-flag-20211117122227-2067 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 12:23:08.993805   13172 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117122227-2067
	W1117 12:23:09.094319   13172 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117122227-2067 returned with exit code 1
	W1117 12:23:09.094360   13172 network_create.go:98] failed to create docker network force-systemd-flag-20211117122227-2067 192.168.67.0/24, will retry: subnet is taken
	I1117 12:23:09.094584   13172 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000384110] amended:true}} dirty:map[192.168.49.0:0xc000384110 192.168.58.0:0xc00027e478 192.168.67.0:0xc00000e180] misses:2}
	I1117 12:23:09.094604   13172 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:23:09.094779   13172 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000384110] amended:true}} dirty:map[192.168.49.0:0xc000384110 192.168.58.0:0xc00027e478 192.168.67.0:0xc00000e180 192.168.76.0:0xc00027e228] misses:2}
	I1117 12:23:09.094795   13172 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:23:09.094804   13172 network_create.go:106] attempt to create docker network force-systemd-flag-20211117122227-2067 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1117 12:23:09.094883   13172 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117122227-2067
	I1117 12:23:17.799929   13172 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117122227-2067: (8.705033005s)
	I1117 12:23:17.799955   13172 network_create.go:90] docker network force-systemd-flag-20211117122227-2067 192.168.76.0/24 created
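
The subnet hunt that precedes the successful create walks candidate 192.168.x.0/24 ranges (.49, .58, .67, .76, ...), skips any that still hold an unexpired reservation, and falls back to the next candidate when Docker refuses the create because the subnet is already taken. A rough sketch of that loop; the third-octet step, the helper name and the omission of the in-memory reservation map are simplifications, not network_create.go's exact behaviour.

-- sketch (Go) --
	package main

	import (
		"fmt"
		"os/exec"
	)

	// createNetwork mirrors the `docker network create` invocation in the log.
	func createNetwork(name, subnet, gateway string) error {
		return exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			name).Run()
	}

	func main() {
		name := "force-systemd-flag-20211117122227-2067"
		// Candidates step through the third octet: .49, .58, .67, .76, ...
		for octet := 49; octet < 256; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			gateway := fmt.Sprintf("192.168.%d.1", octet)
			if err := createNetwork(name, subnet, gateway); err != nil {
				fmt.Printf("subnet %s reserved or taken, trying the next one\n", subnet)
				continue
			}
			fmt.Printf("created network %s on %s\n", name, subnet)
			return
		}
		fmt.Println("no free private subnet found")
	}
-- /sketch --
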
	I1117 12:23:17.799974   13172 kic.go:106] calculated static IP "192.168.76.2" for the "force-systemd-flag-20211117122227-2067" container
	I1117 12:23:17.800090   13172 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:23:17.913691   13172 cli_runner.go:115] Run: docker volume create force-systemd-flag-20211117122227-2067 --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117122227-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:23:18.014530   13172 oci.go:102] Successfully created a docker volume force-systemd-flag-20211117122227-2067
	I1117 12:23:18.014682   13172 cli_runner.go:115] Run: docker run --rm --name force-systemd-flag-20211117122227-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117122227-2067 --entrypoint /usr/bin/test -v force-systemd-flag-20211117122227-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:23:18.463403   13172 oci.go:106] Successfully prepared a docker volume force-systemd-flag-20211117122227-2067
	E1117 12:23:18.463452   13172 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:23:18.463464   13172 client.go:171] LocalClient.Create took 9.776062165s
	I1117 12:23:18.463467   13172 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:23:18.463497   13172 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:23:18.463621   13172 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117122227-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
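
The kic.go:179 step stages the preloaded images by running tar inside the kicbase image, with the host-side tarball mounted read-only and the machine's named volume (the one mounted at /var by the sidecar above) as the extraction target, so the images are in place before the node container starts. The same command reduced to an os/exec call; all paths and the image digest are copied from this run and would differ elsewhere.

-- sketch (Go) --
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		const (
			volume  = "force-systemd-flag-20211117122227-2067"
			kicbase = "gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c"
			tarball = "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4"
		)
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro", // host preload tarball, read-only
			"-v", volume+":/extractDir", // machine volume that backs the node's /var
			kicbase,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extracting preload failed: %v\n%s", err, out)
		}
	}
-- /sketch --
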
	I1117 12:23:20.467489   13172 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:23:20.467594   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:23:20.612109   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:23:20.612252   13172 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:23:20.791082   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:23:20.912753   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:23:20.912863   13172 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:23:21.244159   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:23:21.362625   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:23:21.362712   13172 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:23:21.823873   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:23:21.944312   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	W1117 12:23:21.944403   13172 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	
	W1117 12:23:21.944420   13172 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:23:21.944431   13172 start.go:129] duration metric: createHost completed in 13.304804992s
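
The provisioning code keeps probing disk usage with sh -c "df -h /var | awk 'NR==2{print $5}'", i.e. the Use% column on the second line of df output; here every probe dies earlier because there is no container to open an SSH session into. For reference, the same field extraction done in Go instead of awk; this illustration runs against the local /var, whereas the real check runs over SSH inside the node.

-- sketch (Go) --
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("df", "-h", "/var").Output()
		if err != nil {
			fmt.Println("df failed:", err)
			return
		}
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		if len(lines) < 2 {
			fmt.Println("unexpected df output")
			return
		}
		fields := strings.Fields(lines[1]) // second line describes /var's filesystem
		if len(fields) >= 5 {
			fmt.Println("use% of /var:", fields[4]) // awk's $5
		}
	}
-- /sketch --
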
	I1117 12:23:21.944515   13172 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:23:21.944590   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:23:22.059343   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:23:22.059433   13172 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:23:22.255575   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:23:22.371825   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:23:22.371915   13172 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:23:22.673812   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:23:22.791234   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	I1117 12:23:22.791375   13172 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:23:23.455980   13172 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067
	W1117 12:23:23.575979   13172 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067 returned with exit code 1
	W1117 12:23:23.576077   13172 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	
	W1117 12:23:23.576094   13172 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117122227-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117122227-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	I1117 12:23:23.576104   13172 fix.go:57] fixHost completed within 37.239063331s
	I1117 12:23:23.576115   13172 start.go:80] releasing machines lock for "force-systemd-flag-20211117122227-2067", held for 37.239113648s
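
Every one of the failed probes above is the driver asking Docker which host port was published for the node's 22/tcp, via the NetworkSettings.Ports template; with no container there is nothing to index, hence the retries and the eventual NewSession failures. A sketch of that lookup; the function name is illustrative, the template string is the one from the log.

-- sketch (Go) --
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port Docker published for the container's 22/tcp.
	func sshHostPort(container string) (string, error) {
		const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("force-systemd-flag-20211117122227-2067")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", port)
	}
-- /sketch --
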
	W1117 12:23:23.576290   13172 out.go:241] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-20211117122227-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-20211117122227-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:23:23.623717   13172 out.go:176] 
	W1117 12:23:23.623885   13172 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:23:23.623909   13172 out.go:241] * 
	* 
	W1117 12:23:23.624549   13172 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:23:23.701755   13172 out.go:176] 

** /stderr **
docker_test.go:88: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-20211117122227-2067 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 80
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20211117122227-2067 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:105: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-20211117122227-2067 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (342.309301ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:107: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-20211117122227-2067 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
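
For context, the assertion this step was working toward: with --force-systemd the node's Docker daemon should report systemd as its cgroup driver, and the test reads that back with minikube ssh. A sketch of the check under the assumption of a healthy cluster; the binary path and profile name are copied from the log, and the exact assertion in docker_test.go may be stricter.

-- sketch (Go) --
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "force-systemd-flag-20211117122227-2067"
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", profile,
			"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
		if err != nil {
			fmt.Printf("ssh failed: %v\n%s", err, out)
			return
		}
		if strings.Contains(string(out), "systemd") {
			fmt.Println("cgroup driver is systemd, as forced")
		} else {
			fmt.Printf("expected systemd cgroup driver, got: %s", out)
		}
	}
-- /sketch --
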
docker_test.go:101: *** TestForceSystemdFlag FAILED at 2021-11-17 12:23:24.074986 -0800 PST m=+1999.806576299
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-20211117122227-2067
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-20211117122227-2067:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-20211117122227-2067",
	        "Id": "a063c8bebd9bd60bca509e5d487f00d8eba724d8c83d6a68e583d9e50782e53a",
	        "Created": "2021-11-17T20:23:09.208520809Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
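
The inspect output shows what the failed run left behind: no containers, but the force-systemd-flag network still exists on 192.168.76.0/24, which is why the profile cleanup below matters. A small sketch that pulls the subnet, containers and labels out of that JSON; the struct covers only a minimal subset of what docker network inspect returns.

-- sketch (Go) --
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type dockerNetwork struct {
		Name string
		IPAM struct {
			Config []struct {
				Subnet  string
				Gateway string
			}
		}
		Labels     map[string]string
		Containers map[string]interface{}
	}

	func main() {
		out, err := exec.Command("docker", "network", "inspect",
			"force-systemd-flag-20211117122227-2067").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var nets []dockerNetwork
		if err := json.Unmarshal(out, &nets); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, n := range nets {
			for _, c := range n.IPAM.Config {
				fmt.Printf("%s: %s (gateway %s), %d containers, labels %v\n",
					n.Name, c.Subnet, c.Gateway, len(n.Containers), n.Labels)
			}
		}
	}
-- /sketch --
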
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-20211117122227-2067 -n force-systemd-flag-20211117122227-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-20211117122227-2067 -n force-systemd-flag-20211117122227-2067: exit status 7 (210.566518ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:23:24.375673   13634 status.go:247] status error: host: state: unknown state "force-systemd-flag-20211117122227-2067": docker container inspect force-systemd-flag-20211117122227-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117122227-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-20211117122227-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-20211117122227-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20211117122227-2067

=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20211117122227-2067: (6.088996823s)
--- FAIL: TestForceSystemdFlag (62.70s)

TestForceSystemdEnv (63.17s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20211117122336-2067 --memory=2048 --alsologtostderr -v=5 --driver=docker 
* minikube v1.24.0 on darwin
- MINIKUBE_LOCATION=12739
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.2.0-to-current546002801
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.2.0-to-current546002801/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.2.0-to-current546002801/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.2.0-to-current546002801/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-20211117122336-2067 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 80 (50.647101062s)

-- stdout --
	* [force-systemd-env-20211117122336-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Starting control plane node force-systemd-env-20211117122336-2067 in cluster force-systemd-env-20211117122336-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-20211117122336-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:23:36.598538   13723 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:23:36.598737   13723 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:23:36.598742   13723 out.go:310] Setting ErrFile to fd 2...
	I1117 12:23:36.598745   13723 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:23:36.598810   13723 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:23:36.599121   13723 out.go:304] Setting JSON to false
	I1117 12:23:36.624520   13723 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3191,"bootTime":1637177425,"procs":320,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:23:36.624624   13723 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:23:36.671222   13723 out.go:176] * [force-systemd-env-20211117122336-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:23:36.671328   13723 notify.go:174] Checking for updates...
	I1117 12:23:36.719023   13723 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:23:36.746003   13723 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:23:36.776025   13723 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:23:36.841932   13723 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:23:36.865741   13723 out.go:176]   - MINIKUBE_FORCE_SYSTEMD=true
	I1117 12:23:36.866284   13723 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:23:36.866331   13723 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:23:36.963615   13723 docker.go:132] docker version: linux-20.10.5
	I1117 12:23:36.963789   13723 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:23:37.168283   13723 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:23:37.110279551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
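
info.go:263 obtains that blob by running docker system info --format "{{json .}}" and decoding it; note CgroupDriver:cgroupfs, which is precisely the default the force-systemd tests try to override. Decoding just the fields relevant to that decision; the struct is a deliberately small subset and the field names follow the keys shown in the dump above.

-- sketch (Go) --
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo is a small subset of `docker system info --format "{{json .}}"`.
	type dockerInfo struct {
		ServerVersion   string
		OperatingSystem string
		KernelVersion   string
		CgroupDriver    string
		NCPU            int
		MemTotal        int64
	}

	func main() {
		out, err := exec.Command("docker", "system", "info",
			"--format", "{{json .}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		fmt.Printf("docker %s on %s (kernel %s), cgroup driver %s, %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.OperatingSystem, info.KernelVersion,
			info.CgroupDriver, info.NCPU, info.MemTotal)
	}
-- /sketch --
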
	I1117 12:23:37.224810   13723 out.go:176] * Using the docker driver based on user configuration
	I1117 12:23:37.224894   13723 start.go:280] selected driver: docker
	I1117 12:23:37.224917   13723 start.go:775] validating driver "docker" against <nil>
	I1117 12:23:37.224945   13723 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:23:37.228522   13723 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:23:37.394726   13723 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:23:37.34895546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:23:37.394811   13723 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:23:37.394934   13723 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 12:23:37.394950   13723 cni.go:93] Creating CNI manager for ""
	I1117 12:23:37.394957   13723 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:23:37.394963   13723 start_flags.go:282] config:
	{Name:force-systemd-env-20211117122336-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-env-20211117122336-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:23:37.453757   13723 out.go:176] * Starting control plane node force-systemd-env-20211117122336-2067 in cluster force-systemd-env-20211117122336-2067
	I1117 12:23:37.453846   13723 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:23:37.501945   13723 out.go:176] * Pulling base image ...
	I1117 12:23:37.502019   13723 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:23:37.502084   13723 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:23:37.502097   13723 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:23:37.502124   13723 cache.go:57] Caching tarball of preloaded images
	I1117 12:23:37.502452   13723 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:23:37.502483   13723 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
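
The preload check at preload.go:132/148 amounts to looking for a tarball whose name encodes the preload schema, Kubernetes version, container runtime, storage driver and architecture under the profile's .minikube/cache/preloaded-tarball directory. A sketch of that lookup with the naming scheme inferred from the path in this log; the v14 schema constant and the helper name are assumptions.

-- sketch (Go) --
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath builds the cache path seen in the log, e.g.
	// preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	func preloadPath(minikubeHome, k8sVersion, runtime, arch string) string {
		const schema = "v14" // preload schema version, per this run
		name := fmt.Sprintf("preloaded-images-k8s-%s-%s-%s-overlay2-%s.tar.lz4",
			schema, k8sVersion, runtime, arch)
		return filepath.Join(minikubeHome, ".minikube", "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.22.3", "docker", "amd64")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload:", p)
		} else {
			fmt.Println("no local preload, would download:", p)
		}
	}
-- /sketch --
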
	I1117 12:23:37.503649   13723 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/force-systemd-env-20211117122336-2067/config.json ...
	I1117 12:23:37.503842   13723 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/force-systemd-env-20211117122336-2067/config.json: {Name:mkd38bc9c6292fe84040d4526abb871d8ce289bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:23:37.675622   13723 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:23:37.675643   13723 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:23:37.675671   13723 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:23:37.675739   13723 start.go:313] acquiring machines lock for force-systemd-env-20211117122336-2067: {Name:mk8407ea11ddcb4d0b5ae0da16b8f6026ffd568d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:23:37.675924   13723 start.go:317] acquired machines lock for "force-systemd-env-20211117122336-2067" in 169.778µs
	I1117 12:23:37.675976   13723 start.go:89] Provisioning new machine with config: &{Name:force-systemd-env-20211117122336-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-env-20211117122336-2067 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:23:37.676105   13723 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:23:37.723901   13723 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:23:37.724294   13723 start.go:160] libmachine.API.Create for "force-systemd-env-20211117122336-2067" (driver="docker")
	I1117 12:23:37.724373   13723 client.go:168] LocalClient.Create starting
	I1117 12:23:37.724585   13723 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:23:37.724669   13723 main.go:130] libmachine: Decoding PEM data...
	I1117 12:23:37.724737   13723 main.go:130] libmachine: Parsing certificate...
	I1117 12:23:37.724852   13723 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:23:37.724910   13723 main.go:130] libmachine: Decoding PEM data...
	I1117 12:23:37.724926   13723 main.go:130] libmachine: Parsing certificate...
	I1117 12:23:37.725860   13723 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117122336-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:23:37.837958   13723 cli_runner.go:162] docker network inspect force-systemd-env-20211117122336-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:23:37.838091   13723 network_create.go:254] running [docker network inspect force-systemd-env-20211117122336-2067] to gather additional debugging logs...
	I1117 12:23:37.838108   13723 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117122336-2067
	W1117 12:23:37.950829   13723 cli_runner.go:162] docker network inspect force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:23:37.950855   13723 network_create.go:257] error running [docker network inspect force-systemd-env-20211117122336-2067]: docker network inspect force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20211117122336-2067
	I1117 12:23:37.950875   13723 network_create.go:259] output of [docker network inspect force-systemd-env-20211117122336-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20211117122336-2067
	
	** /stderr **
	I1117 12:23:37.950973   13723 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:23:38.063959   13723 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000e298] misses:0}
	I1117 12:23:38.063997   13723 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:23:38.064018   13723 network_create.go:106] attempt to create docker network force-systemd-env-20211117122336-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:23:38.064097   13723 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117122336-2067
	I1117 12:23:43.740619   13723 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117122336-2067: (5.676518465s)
	I1117 12:23:43.740644   13723 network_create.go:90] docker network force-systemd-env-20211117122336-2067 192.168.49.0/24 created
	I1117 12:23:43.740660   13723 kic.go:106] calculated static IP "192.168.49.2" for the "force-systemd-env-20211117122336-2067" container
	I1117 12:23:43.740776   13723 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:23:43.841093   13723 cli_runner.go:115] Run: docker volume create force-systemd-env-20211117122336-2067 --label name.minikube.sigs.k8s.io=force-systemd-env-20211117122336-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:23:43.944747   13723 oci.go:102] Successfully created a docker volume force-systemd-env-20211117122336-2067
	I1117 12:23:43.944884   13723 cli_runner.go:115] Run: docker run --rm --name force-systemd-env-20211117122336-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20211117122336-2067 --entrypoint /usr/bin/test -v force-systemd-env-20211117122336-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:23:44.438747   13723 oci.go:106] Successfully prepared a docker volume force-systemd-env-20211117122336-2067
	I1117 12:23:44.438803   13723 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	E1117 12:23:44.438804   13723 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:23:44.438822   13723 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:23:44.438828   13723 client.go:171] LocalClient.Create took 6.714481621s
	I1117 12:23:44.438927   13723 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117122336-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:23:46.439341   13723 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:23:46.439462   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:23:46.564047   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:23:46.589429   13723 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:46.867769   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:23:46.988798   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:23:46.988892   13723 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:47.537381   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:23:47.656059   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:23:47.656164   13723 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:48.312022   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:23:48.429446   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	W1117 12:23:48.429536   13723 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	
	W1117 12:23:48.429558   13723 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:48.429573   13723 start.go:129] duration metric: createHost completed in 10.753529433s
	I1117 12:23:48.429579   13723 start.go:80] releasing machines lock for "force-systemd-env-20211117122336-2067", held for 10.75371264s
	W1117 12:23:48.429595   13723 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:23:48.430210   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:23:48.556464   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:23:48.556509   13723 delete.go:82] Unable to get host status for force-systemd-env-20211117122336-2067, assuming it has already been deleted: state: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	W1117 12:23:48.556652   13723 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:23:48.556665   13723 start.go:547] Will try again in 5 seconds ...
	I1117 12:23:50.486853   13723 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117122336-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.047900074s)
	I1117 12:23:50.486868   13723 kic.go:188] duration metric: took 6.048086 seconds to extract preloaded images to volume
	I1117 12:23:53.557480   13723 start.go:313] acquiring machines lock for force-systemd-env-20211117122336-2067: {Name:mk8407ea11ddcb4d0b5ae0da16b8f6026ffd568d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:23:53.557599   13723 start.go:317] acquired machines lock for "force-systemd-env-20211117122336-2067" in 98.469µs
	I1117 12:23:53.557645   13723 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:23:53.557654   13723 fix.go:55] fixHost starting: 
	I1117 12:23:53.557934   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:23:53.674305   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:23:53.674355   13723 fix.go:108] recreateIfNeeded on force-systemd-env-20211117122336-2067: state= err=unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:53.674373   13723 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:23:53.702909   13723 out.go:176] * docker "force-systemd-env-20211117122336-2067" container is missing, will recreate.
	I1117 12:23:53.702989   13723 delete.go:124] DEMOLISHING force-systemd-env-20211117122336-2067 ...
	I1117 12:23:53.703173   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:23:53.849641   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:23:53.849681   13723 stop.go:75] unable to get state: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:53.849710   13723 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:53.850140   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:23:53.971611   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:23:53.971699   13723 delete.go:82] Unable to get host status for force-systemd-env-20211117122336-2067, assuming it has already been deleted: state: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:53.971850   13723 cli_runner.go:115] Run: docker container inspect -f {{.Id}} force-systemd-env-20211117122336-2067
	W1117 12:23:54.096435   13723 cli_runner.go:162] docker container inspect -f {{.Id}} force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:23:54.096463   13723 kic.go:360] could not find the container force-systemd-env-20211117122336-2067 to remove it. will try anyways
	I1117 12:23:54.096554   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:23:54.217992   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:23:54.218054   13723 oci.go:83] error getting container status, will try to delete anyways: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:54.218224   13723 cli_runner.go:115] Run: docker exec --privileged -t force-systemd-env-20211117122336-2067 /bin/bash -c "sudo init 0"
	W1117 12:23:54.362109   13723 cli_runner.go:162] docker exec --privileged -t force-systemd-env-20211117122336-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:23:54.362136   13723 oci.go:656] error shutdown force-systemd-env-20211117122336-2067: docker exec --privileged -t force-systemd-env-20211117122336-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:55.362532   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:23:55.479141   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:23:55.479183   13723 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:55.479203   13723 oci.go:670] temporary error: container force-systemd-env-20211117122336-2067 status is  but expect it to be exited
	I1117 12:23:55.479228   13723 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:55.943256   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:23:56.085388   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:23:56.085447   13723 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:56.085465   13723 oci.go:670] temporary error: container force-systemd-env-20211117122336-2067 status is  but expect it to be exited
	I1117 12:23:56.085490   13723 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:56.979249   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:23:57.082029   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:23:57.082078   13723 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:57.082087   13723 oci.go:670] temporary error: container force-systemd-env-20211117122336-2067 status is  but expect it to be exited
	I1117 12:23:57.082111   13723 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:57.719147   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:23:57.823989   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:23:57.824029   13723 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:57.824038   13723 oci.go:670] temporary error: container force-systemd-env-20211117122336-2067 status is  but expect it to be exited
	I1117 12:23:57.824059   13723 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:58.938755   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:23:59.041123   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:23:59.041161   13723 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:23:59.041178   13723 oci.go:670] temporary error: container force-systemd-env-20211117122336-2067 status is  but expect it to be exited
	I1117 12:23:59.041206   13723 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:00.554476   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:24:00.661942   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:24:00.661992   13723 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:00.662010   13723 oci.go:670] temporary error: container force-systemd-env-20211117122336-2067 status is  but expect it to be exited
	I1117 12:24:00.662045   13723 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:03.704465   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:24:03.809871   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:24:03.809918   13723 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:03.809928   13723 oci.go:670] temporary error: container force-systemd-env-20211117122336-2067 status is  but expect it to be exited
	I1117 12:24:03.809956   13723 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:09.602306   13723 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}
	W1117 12:24:09.705636   13723 cli_runner.go:162] docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:24:09.705684   13723 oci.go:668] temporary error verifying shutdown: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:09.705693   13723 oci.go:670] temporary error: container force-systemd-env-20211117122336-2067 status is  but expect it to be exited
	I1117 12:24:09.705726   13723 oci.go:87] couldn't shut down force-systemd-env-20211117122336-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	 
	I1117 12:24:09.705844   13723 cli_runner.go:115] Run: docker rm -f -v force-systemd-env-20211117122336-2067
	I1117 12:24:09.806774   13723 cli_runner.go:115] Run: docker container inspect -f {{.Id}} force-systemd-env-20211117122336-2067
	W1117 12:24:09.906998   13723 cli_runner.go:162] docker container inspect -f {{.Id}} force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:24:09.907117   13723 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117122336-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:24:10.007228   13723 cli_runner.go:115] Run: docker network rm force-systemd-env-20211117122336-2067
	I1117 12:24:13.461092   13723 cli_runner.go:168] Completed: docker network rm force-systemd-env-20211117122336-2067: (3.45382025s)
	W1117 12:24:13.461388   13723 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:24:13.461405   13723 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:24:14.464144   13723 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:24:14.491418   13723 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:24:14.491624   13723 start.go:160] libmachine.API.Create for "force-systemd-env-20211117122336-2067" (driver="docker")
	I1117 12:24:14.491669   13723 client.go:168] LocalClient.Create starting
	I1117 12:24:14.491836   13723 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:24:14.491923   13723 main.go:130] libmachine: Decoding PEM data...
	I1117 12:24:14.491950   13723 main.go:130] libmachine: Parsing certificate...
	I1117 12:24:14.492043   13723 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:24:14.512978   13723 main.go:130] libmachine: Decoding PEM data...
	I1117 12:24:14.513040   13723 main.go:130] libmachine: Parsing certificate...
	I1117 12:24:14.514066   13723 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117122336-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:24:14.616745   13723 cli_runner.go:162] docker network inspect force-systemd-env-20211117122336-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:24:14.616845   13723 network_create.go:254] running [docker network inspect force-systemd-env-20211117122336-2067] to gather additional debugging logs...
	I1117 12:24:14.616861   13723 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117122336-2067
	W1117 12:24:14.717832   13723 cli_runner.go:162] docker network inspect force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:24:14.717863   13723 network_create.go:257] error running [docker network inspect force-systemd-env-20211117122336-2067]: docker network inspect force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20211117122336-2067
	I1117 12:24:14.717876   13723 network_create.go:259] output of [docker network inspect force-systemd-env-20211117122336-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20211117122336-2067
	
	** /stderr **
	I1117 12:24:14.717979   13723 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:24:14.818949   13723 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e298] amended:false}} dirty:map[] misses:0}
	I1117 12:24:14.818987   13723 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:24:14.819197   13723 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e298] amended:true}} dirty:map[192.168.49.0:0xc00000e298 192.168.58.0:0xc000a902d0] misses:0}
	I1117 12:24:14.819211   13723 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:24:14.819218   13723 network_create.go:106] attempt to create docker network force-systemd-env-20211117122336-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:24:14.819312   13723 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117122336-2067
	W1117 12:24:14.918794   13723 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117122336-2067 returned with exit code 1
	W1117 12:24:14.918842   13723 network_create.go:98] failed to create docker network force-systemd-env-20211117122336-2067 192.168.58.0/24, will retry: subnet is taken
	I1117 12:24:14.919058   13723 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e298] amended:true}} dirty:map[192.168.49.0:0xc00000e298 192.168.58.0:0xc000a902d0] misses:1}
	I1117 12:24:14.919079   13723 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:24:14.919263   13723 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e298] amended:true}} dirty:map[192.168.49.0:0xc00000e298 192.168.58.0:0xc000a902d0 192.168.67.0:0xc00000e420] misses:1}
	I1117 12:24:14.919274   13723 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:24:14.919283   13723 network_create.go:106] attempt to create docker network force-systemd-env-20211117122336-2067 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 12:24:14.919369   13723 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117122336-2067
	I1117 12:24:20.492207   13723 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117122336-2067: (5.572818796s)
	I1117 12:24:20.492228   13723 network_create.go:90] docker network force-systemd-env-20211117122336-2067 192.168.67.0/24 created
	I1117 12:24:20.492241   13723 kic.go:106] calculated static IP "192.168.67.2" for the "force-systemd-env-20211117122336-2067" container
	I1117 12:24:20.492355   13723 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:24:20.592313   13723 cli_runner.go:115] Run: docker volume create force-systemd-env-20211117122336-2067 --label name.minikube.sigs.k8s.io=force-systemd-env-20211117122336-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:24:20.692578   13723 oci.go:102] Successfully created a docker volume force-systemd-env-20211117122336-2067
	I1117 12:24:20.692710   13723 cli_runner.go:115] Run: docker run --rm --name force-systemd-env-20211117122336-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20211117122336-2067 --entrypoint /usr/bin/test -v force-systemd-env-20211117122336-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:24:21.086656   13723 oci.go:106] Successfully prepared a docker volume force-systemd-env-20211117122336-2067
	E1117 12:24:21.086707   13723 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:24:21.086718   13723 client.go:171] LocalClient.Create took 6.595082857s
	I1117 12:24:21.086744   13723 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:24:21.086763   13723 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:24:21.086884   13723 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117122336-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:24:23.086957   13723 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:24:23.087043   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:24:23.208575   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:24:23.208675   13723 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:23.387879   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:24:23.514229   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:24:23.514309   13723 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:23.854111   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:24:23.972388   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:24:23.972473   13723 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:24.433950   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:24:25.392491   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	W1117 12:24:25.392586   13723 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	
	W1117 12:24:25.392619   13723 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:25.392643   13723 start.go:129] duration metric: createHost completed in 10.928521121s
	I1117 12:24:25.392727   13723 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:24:25.392796   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:24:25.513597   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:24:25.513681   13723 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:25.709755   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:24:25.833031   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:24:25.833110   13723 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:26.131233   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:24:26.254880   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	I1117 12:24:26.254980   13723 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:26.926276   13723 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067
	W1117 12:24:27.050978   13723 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067 returned with exit code 1
	W1117 12:24:27.051148   13723 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	
	W1117 12:24:27.051164   13723 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117122336-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117122336-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	I1117 12:24:27.051173   13723 fix.go:57] fixHost completed within 33.493727664s
	I1117 12:24:27.051181   13723 start.go:80] releasing machines lock for "force-systemd-env-20211117122336-2067", held for 33.493780837s
	W1117 12:24:27.051323   13723 out.go:241] * Failed to start docker container. Running "minikube delete -p force-systemd-env-20211117122336-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-20211117122336-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:24:27.097786   13723 out.go:176] 
	W1117 12:24:27.097895   13723 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:24:27.097903   13723 out.go:241] * 
	* 
	W1117 12:24:27.098645   13723 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:24:27.179997   13723 out.go:176] 

** /stderr **
docker_test.go:153: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-20211117122336-2067 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 80
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20211117122336-2067 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:105: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-20211117122336-2067 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (247.659775ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:107: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-20211117122336-2067 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:162: *** TestForceSystemdEnv FAILED at 2021-11-17 12:24:27.460014 -0800 PST m=+2063.191998953
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-20211117122336-2067
helpers_test.go:235: (dbg) docker inspect force-systemd-env-20211117122336-2067:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-20211117122336-2067",
	        "Id": "9bc8cbcd39dc593b9a13cdb26cd8ee27710a61e94fef77f9b7a1d02093469e87",
	        "Created": "2021-11-17T20:24:15.035876393Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-20211117122336-2067 -n force-systemd-env-20211117122336-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-20211117122336-2067 -n force-systemd-env-20211117122336-2067: exit status 7 (206.941579ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:24:27.794896   14135 status.go:247] status error: host: state: unknown state "force-systemd-env-20211117122336-2067": docker container inspect force-systemd-env-20211117122336-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117122336-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-20211117122336-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-20211117122336-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20211117122336-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20211117122336-2067: (11.931890012s)
--- FAIL: TestForceSystemdEnv (63.17s)

TestErrorSpam/setup (44.86s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20211117115142-2067 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 --driver=docker 
error_spam_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p nospam-20211117115142-2067 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 --driver=docker : exit status 80 (44.854238969s)

-- stdout --
	* [nospam-20211117115142-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node nospam-20211117115142-2067 in cluster nospam-20211117115142-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	* docker "nospam-20211117115142-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 11:51:48.482436    2601 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 11:52:22.308409    2601 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p nospam-20211117115142-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:81: "out/minikube-darwin-amd64 start -p nospam-20211117115142-2067 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 --driver=docker " failed: exit status 80
error_spam_test.go:94: unexpected stderr: "E1117 11:51:48.482436    2601 oci.go:173] error getting kernel modules path: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "E1117 11:52:22.308409    2601 oci.go:173] error getting kernel modules path: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "* Failed to start docker container. Running \"minikube delete -p nospam-20211117115142-2067\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "* "
error_spam_test.go:94: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:94: unexpected stderr: "│                                                                                             │"
error_spam_test.go:94: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:94: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:94: unexpected stderr: "│                                                                                             │"
error_spam_test.go:94: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:94: unexpected stderr: "│                                                                                             │"
error_spam_test.go:94: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:108: minikube stdout:
* [nospam-20211117115142-2067] minikube v1.24.0 on Darwin 11.1
- MINIKUBE_LOCATION=12739
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
* Using the docker driver based on user configuration
* Starting control plane node nospam-20211117115142-2067 in cluster nospam-20211117115142-2067
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* docker "nospam-20211117115142-2067" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2250MB) ...

error_spam_test.go:109: minikube stderr:
E1117 11:51:48.482436    2601 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
E1117 11:52:22.308409    2601 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
* Failed to start docker container. Running "minikube delete -p nospam-20211117115142-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:119: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:119: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:119: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (44.86s)

TestFunctional/serial/StartWithProxy (45.65s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2015: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2015: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : exit status 80 (44.832904715s)

-- stdout --
	* [functional-20211117115319-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node functional-20211117115319-2067 in cluster functional-20211117115319-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20211117115319-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51119 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51119 to docker env.
	E1117 11:53:25.692400    3066 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51119 to docker env.
	E1117 11:53:59.461737    3066 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p functional-20211117115319-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2017: failed minikube start. args "out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker ": exit status 80
functional_test.go:2022: start stdout=* [functional-20211117115319-2067] minikube v1.24.0 on Darwin 11.1
- MINIKUBE_LOCATION=12739
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
* Using the docker driver based on user configuration
* Starting control plane node functional-20211117115319-2067 in cluster functional-20211117115319-2067
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=4000MB) ...
* docker "functional-20211117115319-2067" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=4000MB) ...

, want: *Found network options:*
functional_test.go:2027: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51119 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51119 to docker env.
E1117 11:53:25.692400    3066 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
! Local proxy ignored: not passing HTTP_PROXY=localhost:51119 to docker env.
E1117 11:53:59.461737    3066 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
* Failed to start docker container. Running "minikube delete -p functional-20211117115319-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "61c13ae165f27b4be1fa85df489f7f764caf18a615778ce8ae68a2884471563a",
	        "Created": "2021-11-17T19:53:55.115954364Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (150.467149ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 11:54:05.523277    3291 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/StartWithProxy (45.65s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
functional_test.go:579: audit.json does not contain the profile "functional-20211117115319-2067"
--- FAIL: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (68.86s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:600: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --alsologtostderr -v=8
functional_test.go:600: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --alsologtostderr -v=8: exit status 80 (1m8.547410219s)

-- stdout --
	* [functional-20211117115319-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20211117115319-2067 in cluster functional-20211117115319-2067
	* Pulling base image ...
	* docker "functional-20211117115319-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20211117115319-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 11:54:05.564855    3296 out.go:297] Setting OutFile to fd 1 ...
	I1117 11:54:05.565007    3296 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:54:05.565012    3296 out.go:310] Setting ErrFile to fd 2...
	I1117 11:54:05.565015    3296 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:54:05.565108    3296 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 11:54:05.565382    3296 out.go:304] Setting JSON to false
	I1117 11:54:05.590297    3296 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1420,"bootTime":1637177425,"procs":316,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 11:54:05.590482    3296 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 11:54:05.626714    3296 out.go:176] * [functional-20211117115319-2067] minikube v1.24.0 on Darwin 11.1
	I1117 11:54:05.626908    3296 notify.go:174] Checking for updates...
	I1117 11:54:05.704287    3296 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 11:54:05.730512    3296 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 11:54:05.756409    3296 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 11:54:05.782252    3296 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 11:54:05.782594    3296 config.go:176] Loaded profile config "functional-20211117115319-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 11:54:05.782631    3296 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 11:54:05.871099    3296 docker.go:132] docker version: linux-20.10.5
	I1117 11:54:05.871222    3296 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:54:06.019577    3296 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 19:54:05.978272033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:54:06.046331    3296 out.go:176] * Using the docker driver based on existing profile
	I1117 11:54:06.046357    3296 start.go:280] selected driver: docker
	I1117 11:54:06.046363    3296 start.go:775] validating driver "docker" against &{Name:functional-20211117115319-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117115319-2067 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 11:54:06.046426    3296 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 11:54:06.046635    3296 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:54:06.194368    3296 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 19:54:06.152973452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:54:06.196352    3296 cni.go:93] Creating CNI manager for ""
	I1117 11:54:06.196372    3296 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 11:54:06.196383    3296 start_flags.go:282] config:
	{Name:functional-20211117115319-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117115319-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISock
et: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 11:54:06.223316    3296 out.go:176] * Starting control plane node functional-20211117115319-2067 in cluster functional-20211117115319-2067
	I1117 11:54:06.223342    3296 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 11:54:06.270083    3296 out.go:176] * Pulling base image ...
	I1117 11:54:06.270165    3296 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 11:54:06.270239    3296 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 11:54:06.270244    3296 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 11:54:06.270270    3296 cache.go:57] Caching tarball of preloaded images
	I1117 11:54:06.271095    3296 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 11:54:06.271368    3296 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 11:54:06.271777    3296 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/functional-20211117115319-2067/config.json ...
	I1117 11:54:06.385691    3296 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 11:54:06.385703    3296 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 11:54:06.385716    3296 cache.go:206] Successfully downloaded all kic artifacts
	I1117 11:54:06.385778    3296 start.go:313] acquiring machines lock for functional-20211117115319-2067: {Name:mk4569454e13da3fe88fd1d74c9c9e521ae0a801 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:54:06.385870    3296 start.go:317] acquired machines lock for "functional-20211117115319-2067" in 71.87µs
	I1117 11:54:06.385892    3296 start.go:93] Skipping create...Using existing machine configuration
	I1117 11:54:06.385903    3296 fix.go:55] fixHost starting: 
	I1117 11:54:06.386165    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:06.484727    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:06.484781    3296 fix.go:108] recreateIfNeeded on functional-20211117115319-2067: state= err=unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:06.484804    3296 fix.go:113] machineExists: false. err=machine does not exist
	I1117 11:54:06.511656    3296 out.go:176] * docker "functional-20211117115319-2067" container is missing, will recreate.
	I1117 11:54:06.511699    3296 delete.go:124] DEMOLISHING functional-20211117115319-2067 ...
	I1117 11:54:06.512005    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:06.611410    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:54:06.611455    3296 stop.go:75] unable to get state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:06.611469    3296 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:06.611861    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:06.710406    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:06.710458    3296 delete.go:82] Unable to get host status for functional-20211117115319-2067, assuming it has already been deleted: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:06.710566    3296 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
	W1117 11:54:06.808356    3296 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
	I1117 11:54:06.808382    3296 kic.go:360] could not find the container functional-20211117115319-2067 to remove it. will try anyways
	I1117 11:54:06.808461    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:06.906653    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:54:06.906701    3296 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:06.906791    3296 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0"
	W1117 11:54:07.003495    3296 cli_runner.go:162] docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 11:54:07.003521    3296 oci.go:656] error shutdown functional-20211117115319-2067: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:08.011057    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:08.110491    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:08.110532    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:08.110540    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:08.110570    3296 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:08.669601    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:08.771352    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:08.771390    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:08.771400    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:08.771419    3296 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:09.853704    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:09.953268    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:09.953314    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:09.953323    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:09.953344    3296 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:11.272507    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:11.374456    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:11.374499    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:11.374507    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:11.374532    3296 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:12.963480    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:13.070966    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:13.071011    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:13.071033    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:13.071059    3296 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:15.418626    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:15.515930    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:15.515974    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:15.515983    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:15.516007    3296 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:20.026267    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:20.125516    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:20.125554    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:20.125563    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:20.125585    3296 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:23.353043    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:23.453031    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:23.453068    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:23.453076    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:23.453105    3296 oci.go:87] couldn't shut down functional-20211117115319-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	 
	I1117 11:54:23.453186    3296 cli_runner.go:115] Run: docker rm -f -v functional-20211117115319-2067
	I1117 11:54:23.549488    3296 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
	W1117 11:54:23.644145    3296 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
	I1117 11:54:23.644272    3296 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:54:23.740429    3296 cli_runner.go:115] Run: docker network rm functional-20211117115319-2067
	I1117 11:54:26.417982    3296 cli_runner.go:168] Completed: docker network rm functional-20211117115319-2067: (2.677523249s)
	W1117 11:54:26.418260    3296 delete.go:139] delete failed (probably ok) <nil>
	I1117 11:54:26.418267    3296 fix.go:120] Sleeping 1 second for extra luck!
	I1117 11:54:27.419340    3296 start.go:126] createHost starting for "" (driver="docker")
	I1117 11:54:27.446610    3296 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 11:54:27.446798    3296 start.go:160] libmachine.API.Create for "functional-20211117115319-2067" (driver="docker")
	I1117 11:54:27.446839    3296 client.go:168] LocalClient.Create starting
	I1117 11:54:27.447003    3296 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 11:54:27.447100    3296 main.go:130] libmachine: Decoding PEM data...
	I1117 11:54:27.447132    3296 main.go:130] libmachine: Parsing certificate...
	I1117 11:54:27.447285    3296 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 11:54:27.468276    3296 main.go:130] libmachine: Decoding PEM data...
	I1117 11:54:27.468372    3296 main.go:130] libmachine: Parsing certificate...
	I1117 11:54:27.469495    3296 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 11:54:27.570482    3296 cli_runner.go:162] docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 11:54:27.570589    3296 network_create.go:254] running [docker network inspect functional-20211117115319-2067] to gather additional debugging logs...
	I1117 11:54:27.570608    3296 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067
	W1117 11:54:27.666045    3296 cli_runner.go:162] docker network inspect functional-20211117115319-2067 returned with exit code 1
	I1117 11:54:27.666069    3296 network_create.go:257] error running [docker network inspect functional-20211117115319-2067]: docker network inspect functional-20211117115319-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20211117115319-2067
	I1117 11:54:27.666079    3296 network_create.go:259] output of [docker network inspect functional-20211117115319-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20211117115319-2067
	
	** /stderr **
	I1117 11:54:27.666173    3296 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:54:27.763749    3296 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000112ab0] misses:0}
	I1117 11:54:27.763793    3296 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:54:27.763814    3296 network_create.go:106] attempt to create docker network functional-20211117115319-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 11:54:27.763905    3296 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067
	I1117 11:54:31.689596    3296 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067: (3.925678755s)
	I1117 11:54:31.689619    3296 network_create.go:90] docker network functional-20211117115319-2067 192.168.49.0/24 created
	I1117 11:54:31.689634    3296 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117115319-2067" container
	I1117 11:54:31.689746    3296 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 11:54:31.784307    3296 cli_runner.go:115] Run: docker volume create functional-20211117115319-2067 --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 11:54:31.880617    3296 oci.go:102] Successfully created a docker volume functional-20211117115319-2067
	I1117 11:54:31.880742    3296 cli_runner.go:115] Run: docker run --rm --name functional-20211117115319-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --entrypoint /usr/bin/test -v functional-20211117115319-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 11:54:32.293686    3296 oci.go:106] Successfully prepared a docker volume functional-20211117115319-2067
	E1117 11:54:32.293742    3296 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 11:54:32.293752    3296 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 11:54:32.293759    3296 client.go:171] LocalClient.Create took 4.846949723s
	I1117 11:54:32.293771    3296 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 11:54:32.293875    3296 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 11:54:34.298565    3296 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:54:34.298649    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:54:34.439995    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:54:34.440143    3296 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:34.599248    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:54:34.715647    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:54:34.715737    3296 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:35.020169    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:54:35.130108    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:54:35.130197    3296 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:35.710225    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:54:35.824393    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	W1117 11:54:35.824507    3296 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	W1117 11:54:35.824527    3296 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:35.824540    3296 start.go:129] duration metric: createHost completed in 8.405219001s
	I1117 11:54:35.824628    3296 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:54:35.824700    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:54:35.940359    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:54:35.940477    3296 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:36.119485    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:54:36.248102    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:54:36.248253    3296 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:36.579063    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:54:36.695698    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:54:36.695784    3296 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:37.163981    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:54:37.279863    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	W1117 11:54:37.279958    3296 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	W1117 11:54:37.279971    3296 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:37.279985    3296 fix.go:57] fixHost completed within 30.894316489s
	I1117 11:54:37.279993    3296 start.go:80] releasing machines lock for "functional-20211117115319-2067", held for 30.894346806s
	W1117 11:54:37.280010    3296 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 11:54:37.280151    3296 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 11:54:37.280159    3296 start.go:547] Will try again in 5 seconds ...
	I1117 11:54:38.230675    3296 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.936824512s)
	I1117 11:54:38.230692    3296 kic.go:188] duration metric: took 5.936966 seconds to extract preloaded images to volume
	I1117 11:54:42.283467    3296 start.go:313] acquiring machines lock for functional-20211117115319-2067: {Name:mk4569454e13da3fe88fd1d74c9c9e521ae0a801 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:54:42.283625    3296 start.go:317] acquired machines lock for "functional-20211117115319-2067" in 126.044µs
	I1117 11:54:42.283682    3296 start.go:93] Skipping create...Using existing machine configuration
	I1117 11:54:42.283694    3296 fix.go:55] fixHost starting: 
	I1117 11:54:42.284185    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:42.382332    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:42.382371    3296 fix.go:108] recreateIfNeeded on functional-20211117115319-2067: state= err=unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:42.382380    3296 fix.go:113] machineExists: false. err=machine does not exist
	I1117 11:54:42.409154    3296 out.go:176] * docker "functional-20211117115319-2067" container is missing, will recreate.
	I1117 11:54:42.409188    3296 delete.go:124] DEMOLISHING functional-20211117115319-2067 ...
	I1117 11:54:42.409412    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:42.506942    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:54:42.506984    3296 stop.go:75] unable to get state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:42.507002    3296 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:42.507424    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:42.605469    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:42.605512    3296 delete.go:82] Unable to get host status for functional-20211117115319-2067, assuming it has already been deleted: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:42.605619    3296 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
	W1117 11:54:42.703653    3296 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
	I1117 11:54:42.703679    3296 kic.go:360] could not find the container functional-20211117115319-2067 to remove it. will try anyways
	I1117 11:54:42.703771    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:42.801536    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:54:42.801580    3296 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:42.801669    3296 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0"
	W1117 11:54:42.900286    3296 cli_runner.go:162] docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 11:54:42.900317    3296 oci.go:656] error shutdown functional-20211117115319-2067: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:43.900582    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:43.998932    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:43.998975    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:43.998987    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:43.999006    3296 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:44.398654    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:44.496940    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:44.496979    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:44.496987    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:44.497007    3296 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:45.098577    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:45.198453    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:45.198494    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:45.198503    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:45.198524    3296 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:46.526040    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:46.626225    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:46.626272    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:46.626281    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:46.626299    3296 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:47.848618    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:47.949811    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:47.949855    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:47.949873    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:47.949891    3296 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:49.733334    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:49.832934    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:49.832983    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:49.832992    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:49.833013    3296 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:53.107145    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:53.204311    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:53.204352    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:53.204371    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:53.204391    3296 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:59.303260    3296 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:54:59.401878    3296 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:54:59.401918    3296 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:54:59.401926    3296 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:54:59.401950    3296 oci.go:87] couldn't shut down functional-20211117115319-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	 
	I1117 11:54:59.402036    3296 cli_runner.go:115] Run: docker rm -f -v functional-20211117115319-2067
	I1117 11:54:59.498788    3296 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
	W1117 11:54:59.593884    3296 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
	I1117 11:54:59.593994    3296 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:54:59.689360    3296 cli_runner.go:115] Run: docker network rm functional-20211117115319-2067
	I1117 11:55:02.563412    3296 cli_runner.go:168] Completed: docker network rm functional-20211117115319-2067: (2.874019756s)
	W1117 11:55:02.563682    3296 delete.go:139] delete failed (probably ok) <nil>
	I1117 11:55:02.563689    3296 fix.go:120] Sleeping 1 second for extra luck!
	I1117 11:55:03.573815    3296 start.go:126] createHost starting for "" (driver="docker")
	I1117 11:55:03.601190    3296 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 11:55:03.601394    3296 start.go:160] libmachine.API.Create for "functional-20211117115319-2067" (driver="docker")
	I1117 11:55:03.601429    3296 client.go:168] LocalClient.Create starting
	I1117 11:55:03.601645    3296 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 11:55:03.601736    3296 main.go:130] libmachine: Decoding PEM data...
	I1117 11:55:03.601769    3296 main.go:130] libmachine: Parsing certificate...
	I1117 11:55:03.601864    3296 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 11:55:03.601919    3296 main.go:130] libmachine: Decoding PEM data...
	I1117 11:55:03.601939    3296 main.go:130] libmachine: Parsing certificate...
	I1117 11:55:03.602919    3296 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 11:55:03.699665    3296 cli_runner.go:162] docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 11:55:03.699774    3296 network_create.go:254] running [docker network inspect functional-20211117115319-2067] to gather additional debugging logs...
	I1117 11:55:03.699792    3296 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067
	W1117 11:55:03.796520    3296 cli_runner.go:162] docker network inspect functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:03.796544    3296 network_create.go:257] error running [docker network inspect functional-20211117115319-2067]: docker network inspect functional-20211117115319-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20211117115319-2067
	I1117 11:55:03.796557    3296 network_create.go:259] output of [docker network inspect functional-20211117115319-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20211117115319-2067
	
	** /stderr **
	I1117 11:55:03.796655    3296 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:55:03.892690    3296 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000112ab0] amended:false}} dirty:map[] misses:0}
	I1117 11:55:03.892723    3296 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:55:03.892917    3296 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000112ab0] amended:true}} dirty:map[192.168.49.0:0xc000112ab0 192.168.58.0:0xc000112570] misses:0}
	I1117 11:55:03.892929    3296 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:55:03.892936    3296 network_create.go:106] attempt to create docker network functional-20211117115319-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 11:55:03.893021    3296 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067
	I1117 11:55:07.767397    3296 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067: (3.874361334s)
	I1117 11:55:07.767417    3296 network_create.go:90] docker network functional-20211117115319-2067 192.168.58.0/24 created
	I1117 11:55:07.767428    3296 kic.go:106] calculated static IP "192.168.58.2" for the "functional-20211117115319-2067" container
	I1117 11:55:07.767546    3296 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 11:55:07.862489    3296 cli_runner.go:115] Run: docker volume create functional-20211117115319-2067 --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 11:55:07.958351    3296 oci.go:102] Successfully created a docker volume functional-20211117115319-2067
	I1117 11:55:07.958472    3296 cli_runner.go:115] Run: docker run --rm --name functional-20211117115319-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --entrypoint /usr/bin/test -v functional-20211117115319-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 11:55:08.351069    3296 oci.go:106] Successfully prepared a docker volume functional-20211117115319-2067
	E1117 11:55:08.351113    3296 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 11:55:08.351124    3296 client.go:171] LocalClient.Create took 4.749724983s
	I1117 11:55:08.351130    3296 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 11:55:08.351148    3296 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 11:55:08.351254    3296 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 11:55:10.359566    3296 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:55:10.359676    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:10.496133    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:10.496285    3296 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:10.697898    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:10.810561    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:10.810643    3296 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:11.110324    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:11.224721    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:11.224802    3296 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:11.929591    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:12.047007    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	W1117 11:55:12.047091    3296 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	W1117 11:55:12.047106    3296 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:12.047120    3296 start.go:129] duration metric: createHost completed in 8.473319427s
	I1117 11:55:12.047184    3296 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:55:12.047262    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:12.164348    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:12.164433    3296 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:12.511842    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:12.631331    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:12.631420    3296 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:13.080554    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:13.198293    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:13.198397    3296 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:13.774601    3296 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:13.873601    3296 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	W1117 11:55:13.873685    3296 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	W1117 11:55:13.873715    3296 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:13.873725    3296 fix.go:57] fixHost completed within 31.59027043s
	I1117 11:55:13.873733    3296 start.go:80] releasing machines lock for "functional-20211117115319-2067", held for 31.590332295s
	W1117 11:55:13.873876    3296 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117115319-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p functional-20211117115319-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 11:55:13.960096    3296 out.go:176] 
	W1117 11:55:13.960265    3296 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 11:55:13.960280    3296 out.go:241] * 
	* 
	W1117 11:55:13.961302    3296 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 11:55:14.034908    3296 out.go:176] 

                                                
                                                
** /stderr **
functional_test.go:602: failed to soft start minikube. args "out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --alsologtostderr -v=8": exit status 80
functional_test.go:604: soft start took 1m8.559400504s for "functional-20211117115319-2067" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "38a1a6b1e4bc0c0f952a8695bb84f5a5222a49ca89d7669161ddd79a932f168d",
	        "Created": "2021-11-17T19:55:03.996686881Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (171.844925ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:55:14.375423    3604 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/SoftStart (68.86s)

TestFunctional/serial/KubeContext (0.29s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:622: (dbg) Run:  kubectl config current-context
functional_test.go:622: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (37.799842ms)

                                                
                                                
** stderr ** 
	W1117 11:55:14.417158    3609 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:624: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:628: expected current-context = "functional-20211117115319-2067", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubeContext]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "38a1a6b1e4bc0c0f952a8695bb84f5a5222a49ca89d7669161ddd79a932f168d",
	        "Created": "2021-11-17T19:55:03.996686881Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (141.435239ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:55:14.665124    3614 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubeContext (0.29s)

TestFunctional/serial/KubectlGetPods (0.29s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:637: (dbg) Run:  kubectl --context functional-20211117115319-2067 get po -A
functional_test.go:637: (dbg) Non-zero exit: kubectl --context functional-20211117115319-2067 get po -A: exit status 1 (38.192398ms)

                                                
                                                
** stderr ** 
	W1117 11:55:14.706353    3619 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117115319-2067

                                                
                                                
** /stderr **
functional_test.go:639: failed to get kubectl pods: args "kubectl --context functional-20211117115319-2067 get po -A" : exit status 1
functional_test.go:643: expected stderr to be empty but got *"W1117 11:55:14.706353    3619 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig\nError in configuration: context was not found for specified context: functional-20211117115319-2067\n"*: args "kubectl --context functional-20211117115319-2067 get po -A"
functional_test.go:646: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-20211117115319-2067 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "38a1a6b1e4bc0c0f952a8695bb84f5a5222a49ca89d7669161ddd79a932f168d",
	        "Created": "2021-11-17T19:55:03.996686881Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (141.697239ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:55:14.954085    3624 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.29s)

TestFunctional/serial/CacheCmd/cache/add_remote (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache add k8s.gcr.io/pause:3.1
functional_test.go:983: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache add k8s.gcr.io/pause:3.1: exit status 10 (102.451983ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.1": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:3.1
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_cache_1ee7f0edc085faba6c5c2cd5567d37f230636116_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:3.1". args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache add k8s.gcr.io/pause:3.1" err exit status 10
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache add k8s.gcr.io/pause:3.3
functional_test.go:983: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache add k8s.gcr.io/pause:3.3: exit status 10 (101.7987ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.3": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:3.3
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_cache_de8128d312e6d2ac89c1c5074cd22b7974c28c2b_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:3.3". args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache add k8s.gcr.io/pause:3.3" err exit status 10
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache add k8s.gcr.io/pause:latest
functional_test.go:983: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache add k8s.gcr.io/pause:latest: exit status 10 (99.561096ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_latest": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:latest
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_cache_5aa7605f63066fc2b7f8379478b9def700202ac8_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:latest". args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache add k8s.gcr.io/pause:latest" err exit status 10
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_remote (0.30s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1039: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
functional_test.go:1039: (dbg) Non-zero exit: out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3: exit status 30 (92.615971ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.3: no such file or directory
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_cache_e17e40910561608ab15e9700ab84b4e1db856f38_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1041: failed to delete image k8s.gcr.io/pause:3.3 from cache. args "out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3": exit status 30
--- FAIL: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1047: (dbg) Run:  out/minikube-darwin-amd64 cache list
functional_test.go:1052: expected 'cache list' output to include 'k8s.gcr.io/pause' but got: ******
--- FAIL: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1061: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh sudo crictl images
functional_test.go:1061: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh sudo crictl images: exit status 80 (193.28657ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_6599ef642588877027e69d7c08a478c21d2be2a6_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1063: failed to get images by "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh sudo crictl images" ssh exit status 80
functional_test.go:1067: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_6599ef642588877027e69d7c08a478c21d2be2a6_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)
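Every command in this sub-test dies on the same GUEST_STATUS error: the kic container backing the profile no longer exists, so the ssh-based image check fails before it ever reaches crictl. A minimal triage sketch, assuming the same profile name as above and the local Docker Desktop daemon:

    # is the profile's container (or only a leftover network) still present?
    docker ps -a --filter name=functional-20211117115319-2067
    docker container inspect functional-20211117115319-2067 --format '{{.State.Status}}'

    # minikube's own view of the host; "Nonexistent" matches the exit status 7 seen in the post-mortems below
    out/minikube-darwin-amd64 status -p functional-20211117115319-2067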

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh sudo docker rmi k8s.gcr.io/pause:latest: exit status 80 (194.907863ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_f6cc923efa9cb983c5688c815b9a26138561eb5d_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1087: failed to manually delete image "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh sudo docker rmi k8s.gcr.io/pause:latest" : exit status 80
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (199.023093ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_faf3f1cd86a795397a09a2748fe4ee3bd5d83e42_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache reload
functional_test.go:1100: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1100: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (196.294938ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_faf3f1cd86a795397a09a2748fe4ee3bd5d83e42_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1102: expected "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh sudo crictl inspecti k8s.gcr.io/pause:latest" to run successfully but got error: exit status 80
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.66s)
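Only the `cache reload` step at functional_test.go:1095 exits cleanly; both crictl probes fail with the same missing-container error, so the reload has no node to copy the cached image into. The host-side half of the flow can still be checked without a node; a short sketch using this run's binary and profile name:

    # what the host cache holds (this is what `cache reload` would push into the node)
    out/minikube-darwin-amd64 cache list

    # only meaningful once the profile's container is actually running again
    out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache reload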

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1109: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1109: (dbg) Non-zero exit: out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1: exit status 30 (98.116865ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.1: no such file or directory
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_cache_d1b33253e7334db9f364f7cea75d63fe683cad74_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1111: failed to delete k8s.gcr.io/pause:3.1 from cache. args "out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1": exit status 30
functional_test.go:1109: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
functional_test.go:1109: (dbg) Non-zero exit: out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest: exit status 30 (91.992707ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_latest: no such file or directory
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_cache_d17bcf228b7a032ee268baa189bce7c5c7938c34_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1111: failed to delete k8s.gcr.io/pause:latest from cache. args "out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest": exit status 30
--- FAIL: TestFunctional/serial/CacheCmd/cache/delete (0.19s)
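Unlike the GUEST_STATUS failures above, this one is purely host-side: HOST_DEL_CACHE reports that the cached tarballs under .minikube/cache/images/k8s.gcr.io/ are missing, so there is nothing to delete. A sketch of the expected add-then-delete flow (MINIKUBE_HOME here is the per-run .minikube directory shown in the error paths; by default it is ~/.minikube):

    # populate the host cache, confirm the file exists, then delete it again
    out/minikube-darwin-amd64 cache add k8s.gcr.io/pause:3.1
    ls "$MINIKUBE_HOME/cache/images/k8s.gcr.io/"
    out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1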

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:657: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 kubectl -- --context functional-20211117115319-2067 get pods
functional_test.go:657: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 kubectl -- --context functional-20211117115319-2067 get pods: exit status 1 (435.152474ms)

                                                
                                                
** stderr ** 
	W1117 11:55:18.827360    3684 loader.go:221] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117115319-2067
	* no server found for cluster "functional-20211117115319-2067"

                                                
                                                
** /stderr **
functional_test.go:660: failed to get pods. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 kubectl -- --context functional-20211117115319-2067 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "38a1a6b1e4bc0c0f952a8695bb84f5a5222a49ca89d7669161ddd79a932f168d",
	        "Created": "2021-11-17T19:55:03.996686881Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (142.015953ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:55:19.072927    3689 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.68s)
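Both sides are broken here: the host reports Nonexistent, and kubectl is pointed at a context the kubeconfig does not contain (the loader reports the kubeconfig file itself as missing). A client-side sketch, assuming the same KUBECONFIG as this run:

    # list the contexts the kubeconfig actually has, if the file exists at all
    kubectl config get-contexts

    # rewrite the profile's context into the kubeconfig; only useful once the cluster is running
    out/minikube-darwin-amd64 update-context -p functional-20211117115319-2067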

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.76s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:682: (dbg) Run:  out/kubectl --context functional-20211117115319-2067 get pods
functional_test.go:682: (dbg) Non-zero exit: out/kubectl --context functional-20211117115319-2067 get pods: exit status 1 (510.789079ms)

                                                
                                                
** stderr ** 
	W1117 11:55:19.583188    3695 loader.go:221] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117115319-2067
	* no server found for cluster "functional-20211117115319-2067"

                                                
                                                
** /stderr **
functional_test.go:685: failed to run kubectl directly. args "out/kubectl --context functional-20211117115319-2067 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "38a1a6b1e4bc0c0f952a8695bb84f5a5222a49ca89d7669161ddd79a932f168d",
	        "Created": "2021-11-17T19:55:03.996686881Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (145.622577ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:55:19.831933    3701 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.76s)
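One detail in both post-mortems above: `docker inspect functional-20211117115319-2067` returns a network object (bridge driver, subnet 192.168.58.0/24, empty Containers map), not a container, because bare `docker inspect` matches any object type with that name. Asking for the type explicitly removes the ambiguity; a sketch with the same profile name:

    # fails with "No such container" when only the network is left behind
    docker container inspect functional-20211117115319-2067 --format '{{.State.Status}}'

    # the minikube-created bridge network, which is what the post-mortem actually matched
    docker network inspect functional-20211117115319-2067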

                                                
                                    
TestFunctional/serial/ExtraConfig (68.89s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:698: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:698: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (1m8.627930001s)

                                                
                                                
-- stdout --
	* [functional-20211117115319-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20211117115319-2067 in cluster functional-20211117115319-2067
	* Pulling base image ...
	* docker "functional-20211117115319-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20211117115319-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:55:46.796291    3706 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 11:56:22.715833    3706 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p functional-20211117115319-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:700: failed to restart minikube. args "out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:702: restart took 1m8.62819914s for "functional-20211117115319-2067" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "cb28c1b2262d70883caf50271130ba3b7ad3bf633522ec184f3dcf93a36856a5",
	        "Created": "2021-11-17T19:56:18.361617018Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (144.100514ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:56:28.719871    4018 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ExtraConfig (68.89s)
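The restart aborts twice while recreating the missing container ("Unable to locate kernel modules"), which leaves the profile with a network but no container, consistent with the Nonexistent states reported elsewhere in this run. The recovery path the output itself suggests, sketched with this run's flags:

    # discard the half-created profile, then retry the same start
    out/minikube-darwin-amd64 delete -p functional-20211117115319-2067
    out/minikube-darwin-amd64 start -p functional-20211117115319-2067 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all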

                                                
                                    
TestFunctional/serial/ComponentHealth (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:752: (dbg) Run:  kubectl --context functional-20211117115319-2067 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:752: (dbg) Non-zero exit: kubectl --context functional-20211117115319-2067 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (41.621286ms)

                                                
                                                
** stderr ** 
	W1117 11:56:28.761432    4023 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	error: context "functional-20211117115319-2067" does not exist

                                                
                                                
** /stderr **
functional_test.go:754: failed to get components. args "kubectl --context functional-20211117115319-2067 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "cb28c1b2262d70883caf50271130ba3b7ad3bf633522ec184f3dcf93a36856a5",
	        "Created": "2021-11-17T19:56:18.361617018Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (140.006321ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:56:29.006855    4028 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ComponentHealth (0.28s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1173: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 logs
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 logs: exit status 80 (401.543902ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                           Args                           |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                                    | download-only-20211117115004-2067   | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:41 PST | Wed, 17 Nov 2021 11:50:42 PST |
	| delete  | -p                                                       | download-only-20211117115004-2067   | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:42 PST | Wed, 17 Nov 2021 11:50:43 PST |
	|         | download-only-20211117115004-2067                        |                                     |         |         |                               |                               |
	| delete  | -p                                                       | download-only-20211117115004-2067   | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:43 PST | Wed, 17 Nov 2021 11:50:43 PST |
	|         | download-only-20211117115004-2067                        |                                     |         |         |                               |                               |
	| delete  | -p                                                       | download-docker-20211117115043-2067 | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:51 PST | Wed, 17 Nov 2021 11:50:52 PST |
	|         | download-docker-20211117115043-2067                      |                                     |         |         |                               |                               |
	| delete  | -p addons-20211117115052-2067                            | addons-20211117115052-2067          | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:51:38 PST | Wed, 17 Nov 2021 11:51:42 PST |
	| delete  | -p nospam-20211117115142-2067                            | nospam-20211117115142-2067          | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:53:15 PST | Wed, 17 Nov 2021 11:53:19 PST |
	| -p      | functional-20211117115319-2067 cache add                 | functional-20211117115319-2067      | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:15 PST | Wed, 17 Nov 2021 11:55:16 PST |
	|         | minikube-local-cache-test:functional-20211117115319-2067 |                                     |         |         |                               |                               |
	| -p      | functional-20211117115319-2067 cache delete              | functional-20211117115319-2067      | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:16 PST | Wed, 17 Nov 2021 11:55:16 PST |
	|         | minikube-local-cache-test:functional-20211117115319-2067 |                                     |         |         |                               |                               |
	| cache   | list                                                     | minikube                            | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:17 PST | Wed, 17 Nov 2021 11:55:17 PST |
	| -p      | functional-20211117115319-2067                           | functional-20211117115319-2067      | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:17 PST | Wed, 17 Nov 2021 11:55:17 PST |
	|         | cache reload                                             |                                     |         |         |                               |                               |
	|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 11:55:19
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 11:55:19.875068    3706 out.go:297] Setting OutFile to fd 1 ...
	I1117 11:55:19.875200    3706 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:55:19.875202    3706 out.go:310] Setting ErrFile to fd 2...
	I1117 11:55:19.875204    3706 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:55:19.875285    3706 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 11:55:19.875547    3706 out.go:304] Setting JSON to false
	I1117 11:55:19.899880    3706 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1494,"bootTime":1637177425,"procs":316,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 11:55:19.899966    3706 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 11:55:19.927427    3706 out.go:176] * [functional-20211117115319-2067] minikube v1.24.0 on Darwin 11.1
	I1117 11:55:19.927639    3706 notify.go:174] Checking for updates...
	I1117 11:55:19.953781    3706 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 11:55:19.979647    3706 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 11:55:20.005754    3706 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 11:55:20.031468    3706 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 11:55:20.031825    3706 config.go:176] Loaded profile config "functional-20211117115319-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 11:55:20.031858    3706 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 11:55:20.122494    3706 docker.go:132] docker version: linux-20.10.5
	I1117 11:55:20.122631    3706 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:55:20.269773    3706 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 19:55:20.219089817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:55:20.318504    3706 out.go:176] * Using the docker driver based on existing profile
	I1117 11:55:20.318594    3706 start.go:280] selected driver: docker
	I1117 11:55:20.318602    3706 start.go:775] validating driver "docker" against &{Name:functional-20211117115319-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117115319-2067 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 11:55:20.318692    3706 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 11:55:20.319077    3706 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:55:20.466790    3706 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 19:55:20.41694856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:55:20.468914    3706 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 11:55:20.468942    3706 cni.go:93] Creating CNI manager for ""
	I1117 11:55:20.468948    3706 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 11:55:20.468960    3706 start_flags.go:282] config:
	{Name:functional-20211117115319-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117115319-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISock
et: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 11:55:20.496142    3706 out.go:176] * Starting control plane node functional-20211117115319-2067 in cluster functional-20211117115319-2067
	I1117 11:55:20.496233    3706 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 11:55:20.569490    3706 out.go:176] * Pulling base image ...
	I1117 11:55:20.569647    3706 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 11:55:20.569648    3706 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 11:55:20.569730    3706 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 11:55:20.569753    3706 cache.go:57] Caching tarball of preloaded images
	I1117 11:55:20.570572    3706 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 11:55:20.570774    3706 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 11:55:20.571294    3706 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/functional-20211117115319-2067/config.json ...
	I1117 11:55:20.680423    3706 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 11:55:20.680436    3706 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 11:55:20.680448    3706 cache.go:206] Successfully downloaded all kic artifacts
	I1117 11:55:20.680576    3706 start.go:313] acquiring machines lock for functional-20211117115319-2067: {Name:mk4569454e13da3fe88fd1d74c9c9e521ae0a801 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:55:20.680652    3706 start.go:317] acquired machines lock for "functional-20211117115319-2067" in 60.479µs
	I1117 11:55:20.680683    3706 start.go:93] Skipping create...Using existing machine configuration
	I1117 11:55:20.680691    3706 fix.go:55] fixHost starting: 
	I1117 11:55:20.680949    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:20.777934    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:20.777993    3706 fix.go:108] recreateIfNeeded on functional-20211117115319-2067: state= err=unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:20.778015    3706 fix.go:113] machineExists: false. err=machine does not exist
	I1117 11:55:20.804760    3706 out.go:176] * docker "functional-20211117115319-2067" container is missing, will recreate.
	I1117 11:55:20.804808    3706 delete.go:124] DEMOLISHING functional-20211117115319-2067 ...
	I1117 11:55:20.805065    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:20.905445    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:55:20.905481    3706 stop.go:75] unable to get state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:20.905500    3706 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:20.905920    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:21.003716    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:21.003752    3706 delete.go:82] Unable to get host status for functional-20211117115319-2067, assuming it has already been deleted: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:21.003842    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
	W1117 11:55:21.101366    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:21.101385    3706 kic.go:360] could not find the container functional-20211117115319-2067 to remove it. will try anyways
	I1117 11:55:21.101470    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:21.201849    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:55:21.201884    3706 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:21.201961    3706 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0"
	W1117 11:55:21.302354    3706 cli_runner.go:162] docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 11:55:21.302374    3706 oci.go:656] error shutdown functional-20211117115319-2067: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:22.312747    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:22.412989    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:22.413026    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:22.413031    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:55:22.413067    3706 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:22.972506    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:23.072109    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:23.072141    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:23.072147    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:55:23.072165    3706 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:24.153345    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:24.277946    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:24.277986    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:24.277992    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:55:24.278014    3706 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:25.597484    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:25.708610    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:25.708644    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:25.708659    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:55:25.708678    3706 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:27.297321    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:27.397302    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:27.397334    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:27.397341    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:55:27.397360    3706 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:29.747487    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:29.851901    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:29.872110    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:29.872123    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:55:29.872164    3706 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:34.381012    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:34.480956    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:34.480995    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:34.481002    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:55:34.481024    3706 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:37.712883    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:37.811315    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:37.811353    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:37.811361    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:55:37.811384    3706 oci.go:87] couldn't shut down functional-20211117115319-2067 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	 
	I1117 11:55:37.811465    3706 cli_runner.go:115] Run: docker rm -f -v functional-20211117115319-2067
	I1117 11:55:37.907321    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
	W1117 11:55:38.002350    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:38.002456    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:55:38.100202    3706 cli_runner.go:115] Run: docker network rm functional-20211117115319-2067
	I1117 11:55:40.882987    3706 cli_runner.go:168] Completed: docker network rm functional-20211117115319-2067: (2.782710918s)
	W1117 11:55:40.883273    3706 delete.go:139] delete failed (probably ok) <nil>
	I1117 11:55:40.883277    3706 fix.go:120] Sleeping 1 second for extra luck!
	I1117 11:55:41.884786    3706 start.go:126] createHost starting for "" (driver="docker")
	I1117 11:55:41.912047    3706 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 11:55:41.912202    3706 start.go:160] libmachine.API.Create for "functional-20211117115319-2067" (driver="docker")
	I1117 11:55:41.912270    3706 client.go:168] LocalClient.Create starting
	I1117 11:55:41.912457    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 11:55:41.912534    3706 main.go:130] libmachine: Decoding PEM data...
	I1117 11:55:41.912562    3706 main.go:130] libmachine: Parsing certificate...
	I1117 11:55:41.912688    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 11:55:41.912737    3706 main.go:130] libmachine: Decoding PEM data...
	I1117 11:55:41.912756    3706 main.go:130] libmachine: Parsing certificate...
	I1117 11:55:41.913689    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 11:55:42.010063    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 11:55:42.010152    3706 network_create.go:254] running [docker network inspect functional-20211117115319-2067] to gather additional debugging logs...
	I1117 11:55:42.010175    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067
	W1117 11:55:42.105405    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:42.105426    3706 network_create.go:257] error running [docker network inspect functional-20211117115319-2067]: docker network inspect functional-20211117115319-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20211117115319-2067
	I1117 11:55:42.105436    3706 network_create.go:259] output of [docker network inspect functional-20211117115319-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20211117115319-2067
	
	** /stderr **
	I1117 11:55:42.105531    3706 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:55:42.200488    3706 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00071a310] misses:0}
	I1117 11:55:42.200518    3706 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:55:42.200531    3706 network_create.go:106] attempt to create docker network functional-20211117115319-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 11:55:42.200604    3706 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067
	I1117 11:55:46.185110    3706 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067: (3.984499392s)
	I1117 11:55:46.185130    3706 network_create.go:90] docker network functional-20211117115319-2067 192.168.49.0/24 created
	I1117 11:55:46.185148    3706 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117115319-2067" container
	I1117 11:55:46.185265    3706 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 11:55:46.282620    3706 cli_runner.go:115] Run: docker volume create functional-20211117115319-2067 --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 11:55:46.378054    3706 oci.go:102] Successfully created a docker volume functional-20211117115319-2067
	I1117 11:55:46.378155    3706 cli_runner.go:115] Run: docker run --rm --name functional-20211117115319-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --entrypoint /usr/bin/test -v functional-20211117115319-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 11:55:46.796232    3706 oci.go:106] Successfully prepared a docker volume functional-20211117115319-2067
	E1117 11:55:46.796291    3706 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 11:55:46.796301    3706 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 11:55:46.796315    3706 client.go:171] LocalClient.Create took 4.884076362s
	I1117 11:55:46.796326    3706 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 11:55:46.796444    3706 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 11:55:48.796627    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:55:48.796717    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:48.909401    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:48.909502    3706 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:49.063717    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:49.177416    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:49.177492    3706 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:49.478086    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:49.590523    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:49.590682    3706 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:50.162430    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:50.273000    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	W1117 11:55:50.273091    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	W1117 11:55:50.273106    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:50.273112    3706 start.go:129] duration metric: createHost completed in 8.388341682s
	I1117 11:55:50.273172    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:55:50.273227    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:50.377034    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:50.377146    3706 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:50.560531    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:50.674299    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:50.674369    3706 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:51.011700    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:51.136220    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:51.136316    3706 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:51.597261    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:55:51.711907    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	W1117 11:55:51.711979    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	W1117 11:55:51.711990    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:51.711995    3706 fix.go:57] fixHost completed within 31.031538761s
	I1117 11:55:51.712002    3706 start.go:80] releasing machines lock for "functional-20211117115319-2067", held for 31.031576443s
	W1117 11:55:51.712016    3706 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 11:55:51.712130    3706 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 11:55:51.712138    3706 start.go:547] Will try again in 5 seconds ...
	I1117 11:55:53.107805    3706 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.311378231s)
	I1117 11:55:53.107829    3706 kic.go:188] duration metric: took 6.311543 seconds to extract preloaded images to volume
	I1117 11:55:56.717476    3706 start.go:313] acquiring machines lock for functional-20211117115319-2067: {Name:mk4569454e13da3fe88fd1d74c9c9e521ae0a801 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:55:56.717629    3706 start.go:317] acquired machines lock for "functional-20211117115319-2067" in 131.209µs
	I1117 11:55:56.717663    3706 start.go:93] Skipping create...Using existing machine configuration
	I1117 11:55:56.717684    3706 fix.go:55] fixHost starting: 
	I1117 11:55:56.718135    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:56.817438    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:56.817470    3706 fix.go:108] recreateIfNeeded on functional-20211117115319-2067: state= err=unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:56.817480    3706 fix.go:113] machineExists: false. err=machine does not exist
	I1117 11:55:56.864801    3706 out.go:176] * docker "functional-20211117115319-2067" container is missing, will recreate.
	I1117 11:55:56.864826    3706 delete.go:124] DEMOLISHING functional-20211117115319-2067 ...
	I1117 11:55:56.865047    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:56.962051    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:55:56.962084    3706 stop.go:75] unable to get state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:56.962100    3706 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:56.962492    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:57.059608    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:57.059649    3706 delete.go:82] Unable to get host status for functional-20211117115319-2067, assuming it has already been deleted: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:57.059732    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
	W1117 11:55:57.157528    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
	I1117 11:55:57.157551    3706 kic.go:360] could not find the container functional-20211117115319-2067 to remove it. will try anyways
	I1117 11:55:57.157646    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:57.255986    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:55:57.256020    3706 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:57.256103    3706 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0"
	W1117 11:55:57.354076    3706 cli_runner.go:162] docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 11:55:57.354093    3706 oci.go:656] error shutdown functional-20211117115319-2067: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:58.354519    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:58.455357    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:58.455398    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:58.455414    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:55:58.455435    3706 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:58.847437    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:58.946776    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:58.946829    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:58.946841    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:55:58.946865    3706 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:59.547414    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:55:59.648173    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:55:59.648207    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:55:59.648212    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:55:59.648232    3706 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:00.982631    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:56:01.082305    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:56:01.082339    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:01.082346    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:56:01.082363    3706 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:02.297348    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:56:02.396850    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:56:02.396893    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:02.396898    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:56:02.396917    3706 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:04.187244    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:56:04.292040    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:56:04.292074    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:04.292081    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:56:04.292100    3706 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:07.562359    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:56:07.663285    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:56:07.663326    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:07.663335    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:56:07.663356    3706 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:13.770070    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:56:13.871522    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:56:13.871554    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:13.871561    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
	I1117 11:56:13.871583    3706 oci.go:87] couldn't shut down functional-20211117115319-2067 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	 
	I1117 11:56:13.871669    3706 cli_runner.go:115] Run: docker rm -f -v functional-20211117115319-2067
	I1117 11:56:13.967740    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
	W1117 11:56:14.064187    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
	I1117 11:56:14.064291    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:56:14.160011    3706 cli_runner.go:115] Run: docker network rm functional-20211117115319-2067
	I1117 11:56:16.950153    3706 cli_runner.go:168] Completed: docker network rm functional-20211117115319-2067: (2.790120534s)
	W1117 11:56:16.950434    3706 delete.go:139] delete failed (probably ok) <nil>
	I1117 11:56:16.950438    3706 fix.go:120] Sleeping 1 second for extra luck!
	I1117 11:56:17.950540    3706 start.go:126] createHost starting for "" (driver="docker")
	I1117 11:56:17.978016    3706 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 11:56:17.978211    3706 start.go:160] libmachine.API.Create for "functional-20211117115319-2067" (driver="docker")
	I1117 11:56:17.978236    3706 client.go:168] LocalClient.Create starting
	I1117 11:56:17.978451    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 11:56:17.978561    3706 main.go:130] libmachine: Decoding PEM data...
	I1117 11:56:17.978593    3706 main.go:130] libmachine: Parsing certificate...
	I1117 11:56:17.978658    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 11:56:17.978703    3706 main.go:130] libmachine: Decoding PEM data...
	I1117 11:56:17.978723    3706 main.go:130] libmachine: Parsing certificate...
	I1117 11:56:17.979682    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 11:56:18.077257    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 11:56:18.077348    3706 network_create.go:254] running [docker network inspect functional-20211117115319-2067] to gather additional debugging logs...
	I1117 11:56:18.077365    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067
	W1117 11:56:18.172435    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 returned with exit code 1
	I1117 11:56:18.172453    3706 network_create.go:257] error running [docker network inspect functional-20211117115319-2067]: docker network inspect functional-20211117115319-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20211117115319-2067
	I1117 11:56:18.172465    3706 network_create.go:259] output of [docker network inspect functional-20211117115319-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20211117115319-2067
	
	** /stderr **
	I1117 11:56:18.172559    3706 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:56:18.267756    3706 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00071a310] amended:false}} dirty:map[] misses:0}
	I1117 11:56:18.267811    3706 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:56:18.268021    3706 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00071a310] amended:true}} dirty:map[192.168.49.0:0xc00071a310 192.168.58.0:0xc00063a0f0] misses:0}
	I1117 11:56:18.268031    3706 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:56:18.268036    3706 network_create.go:106] attempt to create docker network functional-20211117115319-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 11:56:18.268137    3706 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067
	I1117 11:56:22.106891    3706 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067: (3.838725878s)
	I1117 11:56:22.106911    3706 network_create.go:90] docker network functional-20211117115319-2067 192.168.58.0/24 created
	I1117 11:56:22.106927    3706 kic.go:106] calculated static IP "192.168.58.2" for the "functional-20211117115319-2067" container
	I1117 11:56:22.107047    3706 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 11:56:22.204339    3706 cli_runner.go:115] Run: docker volume create functional-20211117115319-2067 --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 11:56:22.301804    3706 oci.go:102] Successfully created a docker volume functional-20211117115319-2067
	I1117 11:56:22.301927    3706 cli_runner.go:115] Run: docker run --rm --name functional-20211117115319-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --entrypoint /usr/bin/test -v functional-20211117115319-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 11:56:22.715778    3706 oci.go:106] Successfully prepared a docker volume functional-20211117115319-2067
	E1117 11:56:22.715833    3706 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 11:56:22.715842    3706 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 11:56:22.715847    3706 client.go:171] LocalClient.Create took 4.73764318s
	I1117 11:56:22.715859    3706 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 11:56:22.715960    3706 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 11:56:24.724250    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:56:24.724347    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:56:24.877859    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:56:24.878002    3706 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:25.076499    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:56:25.200444    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:56:25.200517    3706 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:25.509066    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:56:25.620085    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:56:25.620162    3706 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:26.325480    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:56:26.443135    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	W1117 11:56:26.443249    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	W1117 11:56:26.443266    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:26.443274    3706 start.go:129] duration metric: createHost completed in 8.492788593s
	I1117 11:56:26.443341    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:56:26.443410    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:56:26.559381    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:56:26.559468    3706 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:26.910872    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:56:27.007651    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:56:27.007721    3706 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:27.465818    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:56:27.588293    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	I1117 11:56:27.588495    3706 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:28.167122    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
	W1117 11:56:28.278184    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
	W1117 11:56:28.278278    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	W1117 11:56:28.278291    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	I1117 11:56:28.278301    3706 fix.go:57] fixHost completed within 31.560869884s
	I1117 11:56:28.278307    3706 start.go:80] releasing machines lock for "functional-20211117115319-2067", held for 31.560906534s
	W1117 11:56:28.278452    3706 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117115319-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 11:56:28.342239    3706 out.go:176] 
	W1117 11:56:28.342422    3706 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 11:56:28.342441    3706 out.go:241] * 
	W1117 11:56:28.343545    3706 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
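The repeated retries in the log above all trace back to one query: minikube asks the Docker daemon which host port is published for the container's 22/tcp endpoint, and the inspect call exits non-zero because the container does not exist. A minimal sketch of that lookup, assuming only the docker CLI and Go's os/exec (the helper name and error wrapping below are illustrative, not minikube's actual code, which runs the same Go template through the cli_runner shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port that Docker publishes for the container's
// 22/tcp port, using the same Go template that appears in the log above.
// If the container is missing, docker prints "Error: No such container: ..."
// and exits 1, so an error is returned instead of a port.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %v: %s", container, err, strings.TrimSpace(string(out)))
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-20211117115319-2067")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("ssh is published on host port", port)
}

Running this against a missing container reproduces the same "Error: No such container" / exit status 1 pair that the retry loop records above.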
functional_test.go:1175: out/minikube-darwin-amd64 -p functional-20211117115319-2067 logs failed: exit status 80
functional_test.go:1165: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command |                           Args                           |               Profile               |  User   | Version |          Start Time           |           End Time            |
|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
| delete  | --all                                                    | download-only-20211117115004-2067   | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:41 PST | Wed, 17 Nov 2021 11:50:42 PST |
| delete  | -p                                                       | download-only-20211117115004-2067   | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:42 PST | Wed, 17 Nov 2021 11:50:43 PST |
|         | download-only-20211117115004-2067                        |                                     |         |         |                               |                               |
| delete  | -p                                                       | download-only-20211117115004-2067   | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:43 PST | Wed, 17 Nov 2021 11:50:43 PST |
|         | download-only-20211117115004-2067                        |                                     |         |         |                               |                               |
| delete  | -p                                                       | download-docker-20211117115043-2067 | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:51 PST | Wed, 17 Nov 2021 11:50:52 PST |
|         | download-docker-20211117115043-2067                      |                                     |         |         |                               |                               |
| delete  | -p addons-20211117115052-2067                            | addons-20211117115052-2067          | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:51:38 PST | Wed, 17 Nov 2021 11:51:42 PST |
| delete  | -p nospam-20211117115142-2067                            | nospam-20211117115142-2067          | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:53:15 PST | Wed, 17 Nov 2021 11:53:19 PST |
| -p      | functional-20211117115319-2067 cache add                 | functional-20211117115319-2067      | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:15 PST | Wed, 17 Nov 2021 11:55:16 PST |
|         | minikube-local-cache-test:functional-20211117115319-2067 |                                     |         |         |                               |                               |
| -p      | functional-20211117115319-2067 cache delete              | functional-20211117115319-2067      | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:16 PST | Wed, 17 Nov 2021 11:55:16 PST |
|         | minikube-local-cache-test:functional-20211117115319-2067 |                                     |         |         |                               |                               |
| cache   | list                                                     | minikube                            | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:17 PST | Wed, 17 Nov 2021 11:55:17 PST |
| -p      | functional-20211117115319-2067                           | functional-20211117115319-2067      | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:17 PST | Wed, 17 Nov 2021 11:55:17 PST |
|         | cache reload                                             |                                     |         |         |                               |                               |
|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|

* 
* ==> Last Start <==
* Log file created at: 2021/11/17 11:55:19
Running on machine: administrators-Mac-mini
Binary: Built with gc go1.17.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1117 11:55:19.875068    3706 out.go:297] Setting OutFile to fd 1 ...
I1117 11:55:19.875200    3706 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 11:55:19.875202    3706 out.go:310] Setting ErrFile to fd 2...
I1117 11:55:19.875204    3706 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 11:55:19.875285    3706 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
I1117 11:55:19.875547    3706 out.go:304] Setting JSON to false
I1117 11:55:19.899880    3706 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1494,"bootTime":1637177425,"procs":316,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W1117 11:55:19.899966    3706 start.go:120] gopshost.Virtualization returned error: not implemented yet
I1117 11:55:19.927427    3706 out.go:176] * [functional-20211117115319-2067] minikube v1.24.0 on Darwin 11.1
I1117 11:55:19.927639    3706 notify.go:174] Checking for updates...
I1117 11:55:19.953781    3706 out.go:176]   - MINIKUBE_LOCATION=12739
I1117 11:55:19.979647    3706 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
I1117 11:55:20.005754    3706 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
I1117 11:55:20.031468    3706 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
I1117 11:55:20.031825    3706 config.go:176] Loaded profile config "functional-20211117115319-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 11:55:20.031858    3706 driver.go:343] Setting default libvirt URI to qemu:///system
I1117 11:55:20.122494    3706 docker.go:132] docker version: linux-20.10.5
I1117 11:55:20.122631    3706 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 11:55:20.269773    3706 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 19:55:20.219089817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I1117 11:55:20.318504    3706 out.go:176] * Using the docker driver based on existing profile
I1117 11:55:20.318594    3706 start.go:280] selected driver: docker
I1117 11:55:20.318602    3706 start.go:775] validating driver "docker" against &{Name:functional-20211117115319-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117115319-2067 Namespace:default APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1117 11:55:20.318692    3706 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1117 11:55:20.319077    3706 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 11:55:20.466790    3706 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 19:55:20.41694856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I1117 11:55:20.468914    3706 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1117 11:55:20.468942    3706 cni.go:93] Creating CNI manager for ""
I1117 11:55:20.468948    3706 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1117 11:55:20.468960    3706 start_flags.go:282] config:
{Name:functional-20211117115319-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117115319-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1117 11:55:20.496142    3706 out.go:176] * Starting control plane node functional-20211117115319-2067 in cluster functional-20211117115319-2067
I1117 11:55:20.496233    3706 cache.go:118] Beginning downloading kic base image for docker with docker
I1117 11:55:20.569490    3706 out.go:176] * Pulling base image ...
I1117 11:55:20.569647    3706 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
I1117 11:55:20.569648    3706 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 11:55:20.569730    3706 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
I1117 11:55:20.569753    3706 cache.go:57] Caching tarball of preloaded images
I1117 11:55:20.570572    3706 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1117 11:55:20.570774    3706 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
I1117 11:55:20.571294    3706 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/functional-20211117115319-2067/config.json ...
I1117 11:55:20.680423    3706 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
I1117 11:55:20.680436    3706 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
I1117 11:55:20.680448    3706 cache.go:206] Successfully downloaded all kic artifacts
I1117 11:55:20.680576    3706 start.go:313] acquiring machines lock for functional-20211117115319-2067: {Name:mk4569454e13da3fe88fd1d74c9c9e521ae0a801 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 11:55:20.680652    3706 start.go:317] acquired machines lock for "functional-20211117115319-2067" in 60.479µs
I1117 11:55:20.680683    3706 start.go:93] Skipping create...Using existing machine configuration
I1117 11:55:20.680691    3706 fix.go:55] fixHost starting: 
I1117 11:55:20.680949    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:20.777934    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:20.777993    3706 fix.go:108] recreateIfNeeded on functional-20211117115319-2067: state= err=unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:20.778015    3706 fix.go:113] machineExists: false. err=machine does not exist
I1117 11:55:20.804760    3706 out.go:176] * docker "functional-20211117115319-2067" container is missing, will recreate.
I1117 11:55:20.804808    3706 delete.go:124] DEMOLISHING functional-20211117115319-2067 ...
I1117 11:55:20.805065    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:20.905445    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
W1117 11:55:20.905481    3706 stop.go:75] unable to get state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:20.905500    3706 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:20.905920    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:21.003716    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:21.003752    3706 delete.go:82] Unable to get host status for functional-20211117115319-2067, assuming it has already been deleted: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:21.003842    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
W1117 11:55:21.101366    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
I1117 11:55:21.101385    3706 kic.go:360] could not find the container functional-20211117115319-2067 to remove it. will try anyways
I1117 11:55:21.101470    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:21.201849    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
W1117 11:55:21.201884    3706 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:21.201961    3706 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0"
W1117 11:55:21.302354    3706 cli_runner.go:162] docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 11:55:21.302374    3706 oci.go:656] error shutdown functional-20211117115319-2067: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:22.312747    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:22.412989    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:22.413026    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:22.413031    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:22.413067    3706 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:22.972506    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:23.072109    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:23.072141    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:23.072147    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:23.072165    3706 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:24.153345    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:24.277946    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:24.277986    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:24.277992    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:24.278014    3706 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:25.597484    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:25.708610    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:25.708644    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:25.708659    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:25.708678    3706 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:27.297321    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:27.397302    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:27.397334    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:27.397341    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:27.397360    3706 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:29.747487    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:29.851901    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:29.872110    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:29.872123    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:29.872164    3706 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:34.381012    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:34.480956    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:34.480995    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:34.481002    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:34.481024    3706 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:37.712883    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:37.811315    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:37.811353    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:37.811361    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:37.811384    3706 oci.go:87] couldn't shut down functional-20211117115319-2067 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067

I1117 11:55:37.811465    3706 cli_runner.go:115] Run: docker rm -f -v functional-20211117115319-2067
I1117 11:55:37.907321    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
W1117 11:55:38.002350    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
I1117 11:55:38.002456    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 11:55:38.100202    3706 cli_runner.go:115] Run: docker network rm functional-20211117115319-2067
I1117 11:55:40.882987    3706 cli_runner.go:168] Completed: docker network rm functional-20211117115319-2067: (2.782710918s)
W1117 11:55:40.883273    3706 delete.go:139] delete failed (probably ok) <nil>
I1117 11:55:40.883277    3706 fix.go:120] Sleeping 1 second for extra luck!
I1117 11:55:41.884786    3706 start.go:126] createHost starting for "" (driver="docker")
I1117 11:55:41.912047    3706 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 11:55:41.912202    3706 start.go:160] libmachine.API.Create for "functional-20211117115319-2067" (driver="docker")
I1117 11:55:41.912270    3706 client.go:168] LocalClient.Create starting
I1117 11:55:41.912457    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
I1117 11:55:41.912534    3706 main.go:130] libmachine: Decoding PEM data...
I1117 11:55:41.912562    3706 main.go:130] libmachine: Parsing certificate...
I1117 11:55:41.912688    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
I1117 11:55:41.912737    3706 main.go:130] libmachine: Decoding PEM data...
I1117 11:55:41.912756    3706 main.go:130] libmachine: Parsing certificate...
I1117 11:55:41.913689    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1117 11:55:42.010063    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1117 11:55:42.010152    3706 network_create.go:254] running [docker network inspect functional-20211117115319-2067] to gather additional debugging logs...
I1117 11:55:42.010175    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067
W1117 11:55:42.105405    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 returned with exit code 1
I1117 11:55:42.105426    3706 network_create.go:257] error running [docker network inspect functional-20211117115319-2067]: docker network inspect functional-20211117115319-2067: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20211117115319-2067
I1117 11:55:42.105436    3706 network_create.go:259] output of [docker network inspect functional-20211117115319-2067]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20211117115319-2067

** /stderr **
I1117 11:55:42.105531    3706 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 11:55:42.200488    3706 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00071a310] misses:0}
I1117 11:55:42.200518    3706 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1117 11:55:42.200531    3706 network_create.go:106] attempt to create docker network functional-20211117115319-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1117 11:55:42.200604    3706 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067
I1117 11:55:46.185110    3706 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067: (3.984499392s)
I1117 11:55:46.185130    3706 network_create.go:90] docker network functional-20211117115319-2067 192.168.49.0/24 created
I1117 11:55:46.185148    3706 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117115319-2067" container
I1117 11:55:46.185265    3706 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 11:55:46.282620    3706 cli_runner.go:115] Run: docker volume create functional-20211117115319-2067 --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --label created_by.minikube.sigs.k8s.io=true
I1117 11:55:46.378054    3706 oci.go:102] Successfully created a docker volume functional-20211117115319-2067
I1117 11:55:46.378155    3706 cli_runner.go:115] Run: docker run --rm --name functional-20211117115319-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --entrypoint /usr/bin/test -v functional-20211117115319-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 11:55:46.796232    3706 oci.go:106] Successfully prepared a docker volume functional-20211117115319-2067
E1117 11:55:46.796291    3706 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
I1117 11:55:46.796301    3706 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 11:55:46.796315    3706 client.go:171] LocalClient.Create took 4.884076362s
I1117 11:55:46.796326    3706 kic.go:179] Starting extracting preloaded images to volume ...
I1117 11:55:46.796444    3706 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1117 11:55:48.796627    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 11:55:48.796717    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:48.909401    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:48.909502    3706 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:49.063717    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:49.177416    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:49.177492    3706 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:49.478086    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:49.590523    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:49.590682    3706 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:50.162430    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:50.273000    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
W1117 11:55:50.273091    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067

W1117 11:55:50.273106    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:50.273112    3706 start.go:129] duration metric: createHost completed in 8.388341682s
I1117 11:55:50.273172    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 11:55:50.273227    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:50.377034    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:50.377146    3706 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:50.560531    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:50.674299    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:50.674369    3706 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:51.011700    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:51.136220    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:51.136316    3706 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:51.597261    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:51.711907    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
W1117 11:55:51.711979    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067

W1117 11:55:51.711990    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:51.711995    3706 fix.go:57] fixHost completed within 31.031538761s
I1117 11:55:51.712002    3706 start.go:80] releasing machines lock for "functional-20211117115319-2067", held for 31.031576443s
W1117 11:55:51.712016    3706 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 11:55:51.712130    3706 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 11:55:51.712138    3706 start.go:547] Will try again in 5 seconds ...
I1117 11:55:53.107805    3706 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.311378231s)
I1117 11:55:53.107829    3706 kic.go:188] duration metric: took 6.311543 seconds to extract preloaded images to volume
I1117 11:55:56.717476    3706 start.go:313] acquiring machines lock for functional-20211117115319-2067: {Name:mk4569454e13da3fe88fd1d74c9c9e521ae0a801 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 11:55:56.717629    3706 start.go:317] acquired machines lock for "functional-20211117115319-2067" in 131.209µs
I1117 11:55:56.717663    3706 start.go:93] Skipping create...Using existing machine configuration
I1117 11:55:56.717684    3706 fix.go:55] fixHost starting: 
I1117 11:55:56.718135    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:56.817438    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:56.817470    3706 fix.go:108] recreateIfNeeded on functional-20211117115319-2067: state= err=unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:56.817480    3706 fix.go:113] machineExists: false. err=machine does not exist
I1117 11:55:56.864801    3706 out.go:176] * docker "functional-20211117115319-2067" container is missing, will recreate.
I1117 11:55:56.864826    3706 delete.go:124] DEMOLISHING functional-20211117115319-2067 ...
I1117 11:55:56.865047    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:56.962051    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
W1117 11:55:56.962084    3706 stop.go:75] unable to get state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:56.962100    3706 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:56.962492    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:57.059608    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:57.059649    3706 delete.go:82] Unable to get host status for functional-20211117115319-2067, assuming it has already been deleted: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:57.059732    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
W1117 11:55:57.157528    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
I1117 11:55:57.157551    3706 kic.go:360] could not find the container functional-20211117115319-2067 to remove it. will try anyways
I1117 11:55:57.157646    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:57.255986    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
W1117 11:55:57.256020    3706 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:57.256103    3706 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0"
W1117 11:55:57.354076    3706 cli_runner.go:162] docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 11:55:57.354093    3706 oci.go:656] error shutdown functional-20211117115319-2067: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:58.354519    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:58.455357    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:58.455398    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:58.455414    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:58.455435    3706 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:58.847437    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:58.946776    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:58.946829    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:58.946841    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:58.946865    3706 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:59.547414    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:59.648173    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:59.648207    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:59.648212    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:59.648232    3706 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:00.982631    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:56:01.082305    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:56:01.082339    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:01.082346    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:56:01.082363    3706 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:02.297348    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:56:02.396850    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:56:02.396893    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:02.396898    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:56:02.396917    3706 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:04.187244    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:56:04.292040    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:56:04.292074    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:04.292081    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:56:04.292100    3706 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:07.562359    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:56:07.663285    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:56:07.663326    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:07.663335    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:56:07.663356    3706 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:13.770070    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:56:13.871522    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:56:13.871554    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:13.871561    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:56:13.871583    3706 oci.go:87] couldn't shut down functional-20211117115319-2067 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:13.871669    3706 cli_runner.go:115] Run: docker rm -f -v functional-20211117115319-2067
I1117 11:56:13.967740    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
W1117 11:56:14.064187    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
I1117 11:56:14.064291    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 11:56:14.160011    3706 cli_runner.go:115] Run: docker network rm functional-20211117115319-2067
I1117 11:56:16.950153    3706 cli_runner.go:168] Completed: docker network rm functional-20211117115319-2067: (2.790120534s)
W1117 11:56:16.950434    3706 delete.go:139] delete failed (probably ok) <nil>
I1117 11:56:16.950438    3706 fix.go:120] Sleeping 1 second for extra luck!
I1117 11:56:17.950540    3706 start.go:126] createHost starting for "" (driver="docker")
I1117 11:56:17.978016    3706 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 11:56:17.978211    3706 start.go:160] libmachine.API.Create for "functional-20211117115319-2067" (driver="docker")
I1117 11:56:17.978236    3706 client.go:168] LocalClient.Create starting
I1117 11:56:17.978451    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
I1117 11:56:17.978561    3706 main.go:130] libmachine: Decoding PEM data...
I1117 11:56:17.978593    3706 main.go:130] libmachine: Parsing certificate...
I1117 11:56:17.978658    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
I1117 11:56:17.978703    3706 main.go:130] libmachine: Decoding PEM data...
I1117 11:56:17.978723    3706 main.go:130] libmachine: Parsing certificate...
I1117 11:56:17.979682    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1117 11:56:18.077257    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1117 11:56:18.077348    3706 network_create.go:254] running [docker network inspect functional-20211117115319-2067] to gather additional debugging logs...
I1117 11:56:18.077365    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067
W1117 11:56:18.172435    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 returned with exit code 1
I1117 11:56:18.172453    3706 network_create.go:257] error running [docker network inspect functional-20211117115319-2067]: docker network inspect functional-20211117115319-2067: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20211117115319-2067
I1117 11:56:18.172465    3706 network_create.go:259] output of [docker network inspect functional-20211117115319-2067]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20211117115319-2067

** /stderr **
I1117 11:56:18.172559    3706 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 11:56:18.267756    3706 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00071a310] amended:false}} dirty:map[] misses:0}
I1117 11:56:18.267811    3706 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1117 11:56:18.268021    3706 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00071a310] amended:true}} dirty:map[192.168.49.0:0xc00071a310 192.168.58.0:0xc00063a0f0] misses:0}
I1117 11:56:18.268031    3706 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1117 11:56:18.268036    3706 network_create.go:106] attempt to create docker network functional-20211117115319-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1117 11:56:18.268137    3706 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067
I1117 11:56:22.106891    3706 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067: (3.838725878s)
I1117 11:56:22.106911    3706 network_create.go:90] docker network functional-20211117115319-2067 192.168.58.0/24 created
I1117 11:56:22.106927    3706 kic.go:106] calculated static IP "192.168.58.2" for the "functional-20211117115319-2067" container
I1117 11:56:22.107047    3706 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 11:56:22.204339    3706 cli_runner.go:115] Run: docker volume create functional-20211117115319-2067 --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --label created_by.minikube.sigs.k8s.io=true
I1117 11:56:22.301804    3706 oci.go:102] Successfully created a docker volume functional-20211117115319-2067
I1117 11:56:22.301927    3706 cli_runner.go:115] Run: docker run --rm --name functional-20211117115319-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --entrypoint /usr/bin/test -v functional-20211117115319-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 11:56:22.715778    3706 oci.go:106] Successfully prepared a docker volume functional-20211117115319-2067
E1117 11:56:22.715833    3706 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
I1117 11:56:22.715842    3706 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 11:56:22.715847    3706 client.go:171] LocalClient.Create took 4.73764318s
I1117 11:56:22.715859    3706 kic.go:179] Starting extracting preloaded images to volume ...
I1117 11:56:22.715960    3706 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1117 11:56:24.724250    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 11:56:24.724347    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:24.877859    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:24.878002    3706 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:25.076499    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:25.200444    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:25.200517    3706 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:25.509066    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:25.620085    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:25.620162    3706 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:26.325480    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:26.443135    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
W1117 11:56:26.443249    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
W1117 11:56:26.443266    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:26.443274    3706 start.go:129] duration metric: createHost completed in 8.492788593s
I1117 11:56:26.443341    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 11:56:26.443410    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:26.559381    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:26.559468    3706 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:26.910872    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:27.007651    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:27.007721    3706 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:27.465818    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:27.588293    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:27.588495    3706 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:28.167122    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:28.278184    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
W1117 11:56:28.278278    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
W1117 11:56:28.278291    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:28.278301    3706 fix.go:57] fixHost completed within 31.560869884s
I1117 11:56:28.278307    3706 start.go:80] releasing machines lock for "functional-20211117115319-2067", held for 31.560906534s
W1117 11:56:28.278452    3706 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117115319-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 11:56:28.342239    3706 out.go:176] 
W1117 11:56:28.342422    3706 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 11:56:28.342441    3706 out.go:241] * 
W1117 11:56:28.343545    3706 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
* 
***
--- FAIL: TestFunctional/serial/LogsCmd (0.42s)
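The failure above boils down to minikube repeatedly polling for a container that Docker no longer knows about. As an illustration only (not part of the test run, and assuming a local docker CLI), the same probe can be reproduced by hand with the inspect template and container name taken from the log:

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-20211117115319-2067
  # exits 1 while the container is missing and prints:
  #   Error: No such container: functional-20211117115319-2067

When the container exists, the template prints the host port published for the guest's 22/tcp, which minikube needs before it can open an SSH session and run "df -h /var".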
TestFunctional/serial/LogsFileCmd (0.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1190: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/functional-20211117115319-20673074141232/logs.txt
functional_test.go:1190: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/functional-20211117115319-20673074141232/logs.txt: exit status 80 (396.668163ms)
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1192: out/minikube-darwin-amd64 -p functional-20211117115319-2067 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/functional-20211117115319-20673074141232/logs.txt failed: exit status 80
functional_test.go:1195: expected empty minikube logs output, but got: 
***
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr *****
functional_test.go:1165: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command |                           Args                           |               Profile               |  User   | Version |          Start Time           |           End Time            |
|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
| delete  | --all                                                    | download-only-20211117115004-2067   | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:41 PST | Wed, 17 Nov 2021 11:50:42 PST |
| delete  | -p                                                       | download-only-20211117115004-2067   | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:42 PST | Wed, 17 Nov 2021 11:50:43 PST |
|         | download-only-20211117115004-2067                        |                                     |         |         |                               |                               |
| delete  | -p                                                       | download-only-20211117115004-2067   | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:43 PST | Wed, 17 Nov 2021 11:50:43 PST |
|         | download-only-20211117115004-2067                        |                                     |         |         |                               |                               |
| delete  | -p                                                       | download-docker-20211117115043-2067 | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:50:51 PST | Wed, 17 Nov 2021 11:50:52 PST |
|         | download-docker-20211117115043-2067                      |                                     |         |         |                               |                               |
| delete  | -p addons-20211117115052-2067                            | addons-20211117115052-2067          | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:51:38 PST | Wed, 17 Nov 2021 11:51:42 PST |
| delete  | -p nospam-20211117115142-2067                            | nospam-20211117115142-2067          | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:53:15 PST | Wed, 17 Nov 2021 11:53:19 PST |
| -p      | functional-20211117115319-2067 cache add                 | functional-20211117115319-2067      | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:15 PST | Wed, 17 Nov 2021 11:55:16 PST |
|         | minikube-local-cache-test:functional-20211117115319-2067 |                                     |         |         |                               |                               |
| -p      | functional-20211117115319-2067 cache delete              | functional-20211117115319-2067      | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:16 PST | Wed, 17 Nov 2021 11:55:16 PST |
|         | minikube-local-cache-test:functional-20211117115319-2067 |                                     |         |         |                               |                               |
| cache   | list                                                     | minikube                            | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:17 PST | Wed, 17 Nov 2021 11:55:17 PST |
| -p      | functional-20211117115319-2067                           | functional-20211117115319-2067      | jenkins | v1.24.0 | Wed, 17 Nov 2021 11:55:17 PST | Wed, 17 Nov 2021 11:55:17 PST |
|         | cache reload                                             |                                     |         |         |                               |                               |
|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|

* 
* ==> Last Start <==
* Log file created at: 2021/11/17 11:55:19
Running on machine: administrators-Mac-mini
Binary: Built with gc go1.17.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
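For example, under that format the first record below, "I1117 11:55:19.875068    3706 out.go:297] Setting OutFile to fd 1 ...", reads as severity I (info), date 11/17, time 11:55:19.875068, thread id 3706, source out.go line 297, then the message.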
I1117 11:55:19.875068    3706 out.go:297] Setting OutFile to fd 1 ...
I1117 11:55:19.875200    3706 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 11:55:19.875202    3706 out.go:310] Setting ErrFile to fd 2...
I1117 11:55:19.875204    3706 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 11:55:19.875285    3706 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
I1117 11:55:19.875547    3706 out.go:304] Setting JSON to false
I1117 11:55:19.899880    3706 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1494,"bootTime":1637177425,"procs":316,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W1117 11:55:19.899966    3706 start.go:120] gopshost.Virtualization returned error: not implemented yet
I1117 11:55:19.927427    3706 out.go:176] * [functional-20211117115319-2067] minikube v1.24.0 on Darwin 11.1
I1117 11:55:19.927639    3706 notify.go:174] Checking for updates...
I1117 11:55:19.953781    3706 out.go:176]   - MINIKUBE_LOCATION=12739
I1117 11:55:19.979647    3706 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
I1117 11:55:20.005754    3706 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
I1117 11:55:20.031468    3706 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
I1117 11:55:20.031825    3706 config.go:176] Loaded profile config "functional-20211117115319-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 11:55:20.031858    3706 driver.go:343] Setting default libvirt URI to qemu:///system
I1117 11:55:20.122494    3706 docker.go:132] docker version: linux-20.10.5
I1117 11:55:20.122631    3706 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 11:55:20.269773    3706 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 19:55:20.219089817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I1117 11:55:20.318504    3706 out.go:176] * Using the docker driver based on existing profile
I1117 11:55:20.318594    3706 start.go:280] selected driver: docker
I1117 11:55:20.318602    3706 start.go:775] validating driver "docker" against &{Name:functional-20211117115319-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117115319-2067 Namespace:default APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1117 11:55:20.318692    3706 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1117 11:55:20.319077    3706 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 11:55:20.466790    3706 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 19:55:20.41694856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I1117 11:55:20.468914    3706 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1117 11:55:20.468942    3706 cni.go:93] Creating CNI manager for ""
I1117 11:55:20.468948    3706 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1117 11:55:20.468960    3706 start_flags.go:282] config:
{Name:functional-20211117115319-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117115319-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1117 11:55:20.496142    3706 out.go:176] * Starting control plane node functional-20211117115319-2067 in cluster functional-20211117115319-2067
I1117 11:55:20.496233    3706 cache.go:118] Beginning downloading kic base image for docker with docker
I1117 11:55:20.569490    3706 out.go:176] * Pulling base image ...
I1117 11:55:20.569647    3706 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
I1117 11:55:20.569648    3706 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 11:55:20.569730    3706 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
I1117 11:55:20.569753    3706 cache.go:57] Caching tarball of preloaded images
I1117 11:55:20.570572    3706 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1117 11:55:20.570774    3706 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
I1117 11:55:20.571294    3706 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/functional-20211117115319-2067/config.json ...
I1117 11:55:20.680423    3706 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
I1117 11:55:20.680436    3706 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
I1117 11:55:20.680448    3706 cache.go:206] Successfully downloaded all kic artifacts
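The "found ... in local docker daemon, skipping pull" decision above comes down to asking the daemon whether the base image is already present. A minimal sketch of such a check, assuming only that the Docker CLI is on PATH (the helper name is illustrative, not minikube's actual image.go):

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local Docker daemon already has the
// image, which is the kind of check that lets a pull be skipped.
func imageInDaemon(ref string) bool {
	// `docker image inspect` exits non-zero when the image is absent.
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase:v0.0.28"
	if imageInDaemon(ref) {
		fmt.Println(ref, "exists in daemon, skipping load")
	} else {
		fmt.Println(ref, "not found, would pull")
	}
}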
I1117 11:55:20.680576    3706 start.go:313] acquiring machines lock for functional-20211117115319-2067: {Name:mk4569454e13da3fe88fd1d74c9c9e521ae0a801 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 11:55:20.680652    3706 start.go:317] acquired machines lock for "functional-20211117115319-2067" in 60.479µs
I1117 11:55:20.680683    3706 start.go:93] Skipping create...Using existing machine configuration
I1117 11:55:20.680691    3706 fix.go:55] fixHost starting: 
I1117 11:55:20.680949    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:20.777934    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:20.777993    3706 fix.go:108] recreateIfNeeded on functional-20211117115319-2067: state= err=unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:20.778015    3706 fix.go:113] machineExists: false. err=machine does not exist
I1117 11:55:20.804760    3706 out.go:176] * docker "functional-20211117115319-2067" container is missing, will recreate.
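The recreate decision above hinges on `docker container inspect` exiting non-zero ("No such container") for the expected node container. A small sketch of that state check, assuming only the Docker CLI; the helper name is illustrative, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the container's .State.Status, or an error when
// `docker container inspect` fails (e.g. the container does not exist).
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("unknown state %q: %v: %s", name, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	name := "functional-20211117115319-2067"
	if _, err := containerState(name); err != nil {
		fmt.Println("container is missing, will recreate:", err)
	}
}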
I1117 11:55:20.804808    3706 delete.go:124] DEMOLISHING functional-20211117115319-2067 ...
I1117 11:55:20.805065    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:20.905445    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
W1117 11:55:20.905481    3706 stop.go:75] unable to get state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:20.905500    3706 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:20.905920    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:21.003716    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:21.003752    3706 delete.go:82] Unable to get host status for functional-20211117115319-2067, assuming it has already been deleted: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:21.003842    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
W1117 11:55:21.101366    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
I1117 11:55:21.101385    3706 kic.go:360] could not find the container functional-20211117115319-2067 to remove it. will try anyways
I1117 11:55:21.101470    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:21.201849    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
W1117 11:55:21.201884    3706 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:21.201961    3706 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0"
W1117 11:55:21.302354    3706 cli_runner.go:162] docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 11:55:21.302374    3706 oci.go:656] error shutdown functional-20211117115319-2067: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:22.312747    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:22.412989    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:22.413026    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:22.413031    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:22.413067    3706 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
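The repeated "will retry after ..." entries that follow come from a retry loop whose wait grows between attempts. A rough sketch of that pattern; the delays and helper name here are illustrative only, not minikube's actual retry.go behaviour:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries fn with an increasing delay until it succeeds
// or the attempts run out.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 500*time.Millisecond, func() error {
		return errors.New("couldn't verify container is exited")
	})
}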
I1117 11:55:22.972506    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:23.072109    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:23.072141    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:23.072147    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:23.072165    3706 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:24.153345    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:24.277946    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:24.277986    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:24.277992    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:24.278014    3706 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:25.597484    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:25.708610    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:25.708644    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:25.708659    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:25.708678    3706 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:27.297321    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:27.397302    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:27.397334    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:27.397341    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:27.397360    3706 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:29.747487    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:29.851901    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:29.872110    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:29.872123    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:29.872164    3706 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:34.381012    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:34.480956    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:34.480995    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:34.481002    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:34.481024    3706 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:37.712883    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:37.811315    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:37.811353    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:37.811361    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:37.811384    3706 oci.go:87] couldn't shut down functional-20211117115319-2067 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:37.811465    3706 cli_runner.go:115] Run: docker rm -f -v functional-20211117115319-2067
I1117 11:55:37.907321    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
W1117 11:55:38.002350    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
I1117 11:55:38.002456    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 11:55:38.100202    3706 cli_runner.go:115] Run: docker network rm functional-20211117115319-2067
I1117 11:55:40.882987    3706 cli_runner.go:168] Completed: docker network rm functional-20211117115319-2067: (2.782710918s)
W1117 11:55:40.883273    3706 delete.go:139] delete failed (probably ok) <nil>
I1117 11:55:40.883277    3706 fix.go:120] Sleeping 1 second for extra luck!
I1117 11:55:41.884786    3706 start.go:126] createHost starting for "" (driver="docker")
I1117 11:55:41.912047    3706 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 11:55:41.912202    3706 start.go:160] libmachine.API.Create for "functional-20211117115319-2067" (driver="docker")
I1117 11:55:41.912270    3706 client.go:168] LocalClient.Create starting
I1117 11:55:41.912457    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
I1117 11:55:41.912534    3706 main.go:130] libmachine: Decoding PEM data...
I1117 11:55:41.912562    3706 main.go:130] libmachine: Parsing certificate...
I1117 11:55:41.912688    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
I1117 11:55:41.912737    3706 main.go:130] libmachine: Decoding PEM data...
I1117 11:55:41.912756    3706 main.go:130] libmachine: Parsing certificate...
I1117 11:55:41.913689    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1117 11:55:42.010063    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1117 11:55:42.010152    3706 network_create.go:254] running [docker network inspect functional-20211117115319-2067] to gather additional debugging logs...
I1117 11:55:42.010175    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067
W1117 11:55:42.105405    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 returned with exit code 1
I1117 11:55:42.105426    3706 network_create.go:257] error running [docker network inspect functional-20211117115319-2067]: docker network inspect functional-20211117115319-2067: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20211117115319-2067
I1117 11:55:42.105436    3706 network_create.go:259] output of [docker network inspect functional-20211117115319-2067]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20211117115319-2067

** /stderr **
I1117 11:55:42.105531    3706 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 11:55:42.200488    3706 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00071a310] misses:0}
I1117 11:55:42.200518    3706 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1117 11:55:42.200531    3706 network_create.go:106] attempt to create docker network functional-20211117115319-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1117 11:55:42.200604    3706 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067
I1117 11:55:46.185110    3706 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067: (3.984499392s)
I1117 11:55:46.185130    3706 network_create.go:90] docker network functional-20211117115319-2067 192.168.49.0/24 created
I1117 11:55:46.185148    3706 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117115319-2067" container
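The "calculated static IP" step above follows the convention visible in the subnet record a few lines earlier: the gateway takes .1 and the first client address is .2 of the freshly created /24. A small sketch of that arithmetic, assuming a plain IPv4 CIDR (helper name is illustrative):

package main

import (
	"fmt"
	"net"
)

// staticIPFor returns the .2 address of an IPv4 subnet, mirroring the
// "gateway is .1, first client is .2" convention seen in the log.
func staticIPFor(cidr string) (string, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return "", fmt.Errorf("not an IPv4 subnet: %s", cidr)
	}
	client := net.IPv4(ip[0], ip[1], ip[2], ip[3]+2)
	return client.String(), nil
}

func main() {
	ip, _ := staticIPFor("192.168.49.0/24")
	fmt.Println(ip) // 192.168.49.2
}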
I1117 11:55:46.185265    3706 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 11:55:46.282620    3706 cli_runner.go:115] Run: docker volume create functional-20211117115319-2067 --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --label created_by.minikube.sigs.k8s.io=true
I1117 11:55:46.378054    3706 oci.go:102] Successfully created a docker volume functional-20211117115319-2067
I1117 11:55:46.378155    3706 cli_runner.go:115] Run: docker run --rm --name functional-20211117115319-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --entrypoint /usr/bin/test -v functional-20211117115319-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 11:55:46.796232    3706 oci.go:106] Successfully prepared a docker volume functional-20211117115319-2067
E1117 11:55:46.796291    3706 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
I1117 11:55:46.796301    3706 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 11:55:46.796315    3706 client.go:171] LocalClient.Create took 4.884076362s
I1117 11:55:46.796326    3706 kic.go:179] Starting extracting preloaded images to volume ...
I1117 11:55:46.796444    3706 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
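The preload step above amounts to streaming the .tar.lz4 into the named volume via a throwaway container running tar. A hedged equivalent from Go, mirroring the command in the log; the path below is an illustrative placeholder and error handling is trimmed:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preloaded-images tarball into a Docker volume by
// running tar inside the base image, as in the logged command.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload(
		"/path/to/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4",
		"functional-20211117115319-2067",
		"gcr.io/k8s-minikube/kicbase:v0.0.28"))
}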
I1117 11:55:48.796627    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 11:55:48.796717    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:48.909401    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:48.909502    3706 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
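The SSH port lookups that keep failing here use a Go template over the container's published ports, so they can only succeed once the node container actually exists. A small sketch of the same query, using the --format template from the log (helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is mapped to 22/tcp for the
// given container.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, container).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %v: %s", container, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-20211117115319-2067")
	fmt.Println(port, err)
}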
I1117 11:55:49.063717    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:49.177416    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:49.177492    3706 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:49.478086    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:49.590523    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:49.590682    3706 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:50.162430    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:50.273000    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
W1117 11:55:50.273091    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
W1117 11:55:50.273106    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:50.273112    3706 start.go:129] duration metric: createHost completed in 8.388341682s
I1117 11:55:50.273172    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 11:55:50.273227    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:50.377034    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:50.377146    3706 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:50.560531    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:50.674299    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:50.674369    3706 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:51.011700    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:51.136220    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:55:51.136316    3706 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:51.597261    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:55:51.711907    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
W1117 11:55:51.711979    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
W1117 11:55:51.711990    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:51.711995    3706 fix.go:57] fixHost completed within 31.031538761s
I1117 11:55:51.712002    3706 start.go:80] releasing machines lock for "functional-20211117115319-2067", held for 31.031576443s
W1117 11:55:51.712016    3706 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 11:55:51.712130    3706 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 11:55:51.712138    3706 start.go:547] Will try again in 5 seconds ...
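The underlying failure here is "create kic node: kernel modules: Unable to locate kernel modules", first logged by oci.go above. Conceptually that corresponds to not finding a modules directory for the running kernel on the host; the sketch below is a speculative illustration of that general idea (an assumption, not minikube's actual oci.go logic):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// kernelModulesPath guesses where the host keeps kernel modules by combining
// /lib/modules with `uname -r`. Purely illustrative.
func kernelModulesPath() (string, error) {
	release, err := exec.Command("uname", "-r").Output()
	if err != nil {
		return "", err
	}
	path := "/lib/modules/" + strings.TrimSpace(string(release))
	if _, err := os.Stat(path); err != nil {
		return "", fmt.Errorf("unable to locate kernel modules: %w", err)
	}
	return path, nil
}

func main() {
	fmt.Println(kernelModulesPath())
}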
I1117 11:55:53.107805    3706 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.311378231s)
I1117 11:55:53.107829    3706 kic.go:188] duration metric: took 6.311543 seconds to extract preloaded images to volume
I1117 11:55:56.717476    3706 start.go:313] acquiring machines lock for functional-20211117115319-2067: {Name:mk4569454e13da3fe88fd1d74c9c9e521ae0a801 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 11:55:56.717629    3706 start.go:317] acquired machines lock for "functional-20211117115319-2067" in 131.209µs
I1117 11:55:56.717663    3706 start.go:93] Skipping create...Using existing machine configuration
I1117 11:55:56.717684    3706 fix.go:55] fixHost starting: 
I1117 11:55:56.718135    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:56.817438    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:56.817470    3706 fix.go:108] recreateIfNeeded on functional-20211117115319-2067: state= err=unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:56.817480    3706 fix.go:113] machineExists: false. err=machine does not exist
I1117 11:55:56.864801    3706 out.go:176] * docker "functional-20211117115319-2067" container is missing, will recreate.
I1117 11:55:56.864826    3706 delete.go:124] DEMOLISHING functional-20211117115319-2067 ...
I1117 11:55:56.865047    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:56.962051    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
W1117 11:55:56.962084    3706 stop.go:75] unable to get state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:56.962100    3706 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:56.962492    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:57.059608    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:57.059649    3706 delete.go:82] Unable to get host status for functional-20211117115319-2067, assuming it has already been deleted: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:57.059732    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
W1117 11:55:57.157528    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
I1117 11:55:57.157551    3706 kic.go:360] could not find the container functional-20211117115319-2067 to remove it. will try anyways
I1117 11:55:57.157646    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:57.255986    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
W1117 11:55:57.256020    3706 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:57.256103    3706 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0"
W1117 11:55:57.354076    3706 cli_runner.go:162] docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 11:55:57.354093    3706 oci.go:656] error shutdown functional-20211117115319-2067: docker exec --privileged -t functional-20211117115319-2067 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:58.354519    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:58.455357    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:58.455398    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:58.455414    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:58.455435    3706 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:58.847437    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:58.946776    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:58.946829    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:58.946841    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:58.946865    3706 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:59.547414    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:55:59.648173    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:55:59.648207    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:55:59.648212    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:55:59.648232    3706 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:00.982631    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:56:01.082305    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:56:01.082339    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:01.082346    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:56:01.082363    3706 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:02.297348    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:56:02.396850    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:56:02.396893    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:02.396898    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:56:02.396917    3706 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:04.187244    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:56:04.292040    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:56:04.292074    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:04.292081    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:56:04.292100    3706 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:07.562359    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:56:07.663285    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:56:07.663326    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:07.663335    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:56:07.663356    3706 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:13.770070    3706 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:56:13.871522    3706 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:56:13.871554    3706 oci.go:668] temporary error verifying shutdown: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:13.871561    3706 oci.go:670] temporary error: container functional-20211117115319-2067 status is  but expect it to be exited
I1117 11:56:13.871583    3706 oci.go:87] couldn't shut down functional-20211117115319-2067 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067

                                                
                                                
I1117 11:56:13.871669    3706 cli_runner.go:115] Run: docker rm -f -v functional-20211117115319-2067
I1117 11:56:13.967740    3706 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117115319-2067
W1117 11:56:14.064187    3706 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117115319-2067 returned with exit code 1
I1117 11:56:14.064291    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 11:56:14.160011    3706 cli_runner.go:115] Run: docker network rm functional-20211117115319-2067
I1117 11:56:16.950153    3706 cli_runner.go:168] Completed: docker network rm functional-20211117115319-2067: (2.790120534s)
W1117 11:56:16.950434    3706 delete.go:139] delete failed (probably ok) <nil>
I1117 11:56:16.950438    3706 fix.go:120] Sleeping 1 second for extra luck!
I1117 11:56:17.950540    3706 start.go:126] createHost starting for "" (driver="docker")
I1117 11:56:17.978016    3706 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 11:56:17.978211    3706 start.go:160] libmachine.API.Create for "functional-20211117115319-2067" (driver="docker")
I1117 11:56:17.978236    3706 client.go:168] LocalClient.Create starting
I1117 11:56:17.978451    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
I1117 11:56:17.978561    3706 main.go:130] libmachine: Decoding PEM data...
I1117 11:56:17.978593    3706 main.go:130] libmachine: Parsing certificate...
I1117 11:56:17.978658    3706 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
I1117 11:56:17.978703    3706 main.go:130] libmachine: Decoding PEM data...
I1117 11:56:17.978723    3706 main.go:130] libmachine: Parsing certificate...
I1117 11:56:17.979682    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1117 11:56:18.077257    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1117 11:56:18.077348    3706 network_create.go:254] running [docker network inspect functional-20211117115319-2067] to gather additional debugging logs...
I1117 11:56:18.077365    3706 cli_runner.go:115] Run: docker network inspect functional-20211117115319-2067
W1117 11:56:18.172435    3706 cli_runner.go:162] docker network inspect functional-20211117115319-2067 returned with exit code 1
I1117 11:56:18.172453    3706 network_create.go:257] error running [docker network inspect functional-20211117115319-2067]: docker network inspect functional-20211117115319-2067: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error: No such network: functional-20211117115319-2067
I1117 11:56:18.172465    3706 network_create.go:259] output of [docker network inspect functional-20211117115319-2067]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error: No such network: functional-20211117115319-2067

                                                
                                                
** /stderr **
I1117 11:56:18.172559    3706 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 11:56:18.267756    3706 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00071a310] amended:false}} dirty:map[] misses:0}
I1117 11:56:18.267811    3706 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1117 11:56:18.268021    3706 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00071a310] amended:true}} dirty:map[192.168.49.0:0xc00071a310 192.168.58.0:0xc00063a0f0] misses:0}
I1117 11:56:18.268031    3706 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1117 11:56:18.268036    3706 network_create.go:106] attempt to create docker network functional-20211117115319-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1117 11:56:18.268137    3706 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067
I1117 11:56:22.106891    3706 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117115319-2067: (3.838725878s)
I1117 11:56:22.106911    3706 network_create.go:90] docker network functional-20211117115319-2067 192.168.58.0/24 created
I1117 11:56:22.106927    3706 kic.go:106] calculated static IP "192.168.58.2" for the "functional-20211117115319-2067" container
I1117 11:56:22.107047    3706 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 11:56:22.204339    3706 cli_runner.go:115] Run: docker volume create functional-20211117115319-2067 --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --label created_by.minikube.sigs.k8s.io=true
I1117 11:56:22.301804    3706 oci.go:102] Successfully created a docker volume functional-20211117115319-2067
I1117 11:56:22.301927    3706 cli_runner.go:115] Run: docker run --rm --name functional-20211117115319-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117115319-2067 --entrypoint /usr/bin/test -v functional-20211117115319-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 11:56:22.715778    3706 oci.go:106] Successfully prepared a docker volume functional-20211117115319-2067
E1117 11:56:22.715833    3706 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
I1117 11:56:22.715842    3706 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 11:56:22.715847    3706 client.go:171] LocalClient.Create took 4.73764318s
I1117 11:56:22.715859    3706 kic.go:179] Starting extracting preloaded images to volume ...
I1117 11:56:22.715960    3706 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117115319-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1117 11:56:24.724250    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 11:56:24.724347    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:24.877859    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:24.878002    3706 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:25.076499    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:25.200444    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:25.200517    3706 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:25.509066    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:25.620085    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:25.620162    3706 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:26.325480    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:26.443135    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
W1117 11:56:26.443249    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067

                                                
                                                
W1117 11:56:26.443266    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:26.443274    3706 start.go:129] duration metric: createHost completed in 8.492788593s
I1117 11:56:26.443341    3706 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 11:56:26.443410    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:26.559381    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:26.559468    3706 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:26.910872    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:27.007651    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:27.007721    3706 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:27.465818    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:27.588293    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
I1117 11:56:27.588495    3706 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:28.167122    3706 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067
W1117 11:56:28.278184    3706 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067 returned with exit code 1
W1117 11:56:28.278278    3706 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067

                                                
                                                
W1117 11:56:28.278291    3706 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117115319-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117115319-2067: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067
I1117 11:56:28.278301    3706 fix.go:57] fixHost completed within 31.560869884s
I1117 11:56:28.278307    3706 start.go:80] releasing machines lock for "functional-20211117115319-2067", held for 31.560906534s
W1117 11:56:28.278452    3706 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117115319-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 11:56:28.342239    3706 out.go:176] 
W1117 11:56:28.342422    3706 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 11:56:28.342441    3706 out.go:241] * 
W1117 11:56:28.343545    3706 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
* 
***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.40s)
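The shutdown-verification loop above polls docker container inspect --format={{.State.Status}} with roughly doubling delays (1.78s, 3.27s, 6.10s) before giving up, and the %!v(MISSING) in the retry messages is a format verb with no matching argument in the underlying log call. As a minimal sketch of that poll-with-backoff pattern (not minikube's actual oci/retry code; containerState and waitExited are illustrative helpers):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerState shells out the same way the log above does; a missing
// container surfaces as exit status 1 with "No such container" on stderr.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("unknown state %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

// waitExited polls until the container reports "exited", doubling the delay
// between attempts, roughly the 1.7s -> 3.2s -> 6.1s cadence in the log.
func waitExited(name string, attempts int) error {
	delay := 2 * time.Second
	for i := 0; i < attempts; i++ {
		state, err := containerState(name)
		if err == nil && state == "exited" {
			return nil
		}
		fmt.Printf("will retry after %v: state=%q err=%v\n", delay, state, err)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("couldn't verify container %q is exited", name)
}

func main() {
	if err := waitExited("functional-20211117115319-2067", 4); err != nil {
		fmt.Println(err)
	}
}

Run against the profile name from this report, the sketch fails the same way the log does, since the container was never created.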

                                                
                                    
TestFunctional/parallel/DashboardCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:847: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211117115319-2067 --alsologtostderr -v=1]
functional_test.go:860: output didn't produce a URL
functional_test.go:852: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211117115319-2067 --alsologtostderr -v=1] ...
functional_test.go:852: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211117115319-2067 --alsologtostderr -v=1] stdout:
functional_test.go:852: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211117115319-2067 --alsologtostderr -v=1] stderr:
I1117 11:57:06.028083    4472 out.go:297] Setting OutFile to fd 1 ...
I1117 11:57:06.028320    4472 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 11:57:06.028325    4472 out.go:310] Setting ErrFile to fd 2...
I1117 11:57:06.028328    4472 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 11:57:06.028402    4472 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
I1117 11:57:06.028576    4472 mustload.go:65] Loading cluster: functional-20211117115319-2067
I1117 11:57:06.028810    4472 config.go:176] Loaded profile config "functional-20211117115319-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 11:57:06.029180    4472 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
W1117 11:57:06.123808    4472 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
I1117 11:57:06.151069    4472 out.go:176] 
W1117 11:57:06.151247    4472 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067

                                                
                                                
X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
stdout:

                                                
                                                

                                                
                                                
stderr:
Error: No such container: functional-20211117115319-2067

                                                
                                                
W1117 11:57:06.151262    4472 out.go:241] * 
* 
W1117 11:57:06.154934    4472 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                              │
│    * If the above advice does not help, please let us know:                                                                  │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                                │
│                                                                                                                              │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                     │
│    * Please also attach the following file to the GitHub issue:                                                              │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log    │
│                                                                                                                              │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                              │
│    * If the above advice does not help, please let us know:                                                                  │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                                │
│                                                                                                                              │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                     │
│    * Please also attach the following file to the GitHub issue:                                                              │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log    │
│                                                                                                                              │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1117 11:57:06.176859    4472 out.go:176] 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "cb28c1b2262d70883caf50271130ba3b7ad3bf633522ec184f3dcf93a36856a5",
	        "Created": "2021-11-17T19:56:18.361617018Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (136.777595ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:57:06.530832    4483 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.54s)
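Note that the post-mortem docker inspect above matches the network named functional-20211117115319-2067 (created by the docker network create at 11:56:18 during the earlier recreate attempt), not a container; the container itself never existed, which is why the status probe exits with code 7. A small illustrative helper, reusing the subnet template fragment the driver's network inspect commands show above, would read the leftover network back like this:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkSubnet reads the subnet of a Docker network using the same template
// fragment seen in the network inspect commands earlier in this report.
func networkSubnet(name string) (string, error) {
	out, err := exec.Command("docker", "network", "inspect", name,
		"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	subnet, err := networkSubnet("functional-20211117115319-2067")
	fmt.Println(subnet, err) // e.g. "192.168.58.0/24 <nil>" while the leftover network exists
}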

                                                
                                    
TestFunctional/parallel/StatusCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:796: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 status
functional_test.go:796: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 status: exit status 7 (136.557144ms)

                                                
                                                
-- stdout --
	functional-20211117115319-2067
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:57:04.179409    4411 status.go:258] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	E1117 11:57:04.179417    4411 status.go:261] The "functional-20211117115319-2067" host does not exist!

                                                
                                                
** /stderr **
functional_test.go:798: failed to run minikube status. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 status" : exit status 7
functional_test.go:802: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:802: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (170.275047ms)

                                                
                                                
-- stdout --
	host:Nonexistent,kublet:Nonexistent,apiserver:Nonexistent,kubeconfig:Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:57:04.349897    4416 status.go:258] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	E1117 11:57:04.349906    4416 status.go:261] The "functional-20211117115319-2067" host does not exist!

                                                
                                                
** /stderr **
functional_test.go:804: failed to run minikube status with custom format: args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:814: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 status -o json
functional_test.go:814: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 status -o json: exit status 7 (136.878487ms)

                                                
                                                
-- stdout --
	{"Name":"functional-20211117115319-2067","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:57:04.487063    4421 status.go:258] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	E1117 11:57:04.487070    4421 status.go:261] The "functional-20211117115319-2067" host does not exist!

                                                
                                                
** /stderr **
functional_test.go:816: failed to run minikube status with json output. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "cb28c1b2262d70883caf50271130ba3b7ad3bf633522ec184f3dcf93a36856a5",
	        "Created": "2021-11-17T19:56:18.361617018Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (139.308505ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:57:04.724893    4430 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/StatusCmd (0.68s)
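The status probes above exercise three output modes: the default table, a custom Go template (-f host:{{.Host}},...), and -o json. A minimal sketch of consuming the JSON form, using an illustrative struct with the field names visible in the output rather than minikube's own status type:

package main

import (
	"encoding/json"
	"fmt"
)

// status mirrors the field names visible in the -o json output above;
// it is an illustrative struct, not minikube's own type.
type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Verbatim from the -o json run above.
	raw := `{"Name":"functional-20211117115319-2067","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`
	var st status
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// A Host of "Nonexistent" is what accompanies the exit status 7 seen above.
	fmt.Printf("host=%s apiserver=%s worker=%v\n", st.Host, st.APIServer, st.Worker)
}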

                                                
                                    
TestFunctional/parallel/ServiceCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1372: (dbg) Run:  kubectl --context functional-20211117115319-2067 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1372: (dbg) Non-zero exit: kubectl --context functional-20211117115319-2067 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8: exit status 1 (38.231144ms)

                                                
                                                
** stderr ** 
	W1117 11:56:37.161406    4284 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	error: context "functional-20211117115319-2067" does not exist

                                                
                                                
** /stderr **
functional_test.go:1376: failed to create hello-node deployment with this command "kubectl --context functional-20211117115319-2067 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8": exit status 1.
functional_test.go:1341: service test failed - dumping debug information
functional_test.go:1342: -----------------------service failure post-mortem--------------------------------
functional_test.go:1345: (dbg) Run:  kubectl --context functional-20211117115319-2067 describe po hello-node
functional_test.go:1345: (dbg) Non-zero exit: kubectl --context functional-20211117115319-2067 describe po hello-node: exit status 1 (39.193903ms)

                                                
                                                
** stderr ** 
	W1117 11:56:37.203769    4285 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117115319-2067

                                                
                                                
** /stderr **
functional_test.go:1347: "kubectl --context functional-20211117115319-2067 describe po hello-node" failed: exit status 1
functional_test.go:1349: hello-node pod describe:
functional_test.go:1351: (dbg) Run:  kubectl --context functional-20211117115319-2067 logs -l app=hello-node
functional_test.go:1351: (dbg) Non-zero exit: kubectl --context functional-20211117115319-2067 logs -l app=hello-node: exit status 1 (40.290898ms)

                                                
                                                
** stderr ** 
	W1117 11:56:37.244452    4286 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117115319-2067

                                                
                                                
** /stderr **
functional_test.go:1353: "kubectl --context functional-20211117115319-2067 logs -l app=hello-node" failed: exit status 1
functional_test.go:1355: hello-node logs:
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20211117115319-2067 describe svc hello-node
functional_test.go:1357: (dbg) Non-zero exit: kubectl --context functional-20211117115319-2067 describe svc hello-node: exit status 1 (42.14979ms)

                                                
                                                
** stderr ** 
	W1117 11:56:37.286487    4287 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117115319-2067

                                                
                                                
** /stderr **
functional_test.go:1359: "kubectl --context functional-20211117115319-2067 describe svc hello-node" failed: exit status 1
functional_test.go:1361: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "cb28c1b2262d70883caf50271130ba3b7ad3bf633522ec184f3dcf93a36856a5",
	        "Created": "2021-11-17T19:56:18.361617018Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (139.433588ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:56:37.534102    4292 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/ServiceCmd (0.41s)
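Every kubectl call above fails with context "functional-20211117115319-2067" does not exist because the kubeconfig at the integration path was never written. A minimal pre-flight check for that condition, assuming a hypothetical hasContext helper that shells out to kubectl config get-contexts -o name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasContext reports whether kubectl knows about the given context; the
// failures above come from a kubeconfig that was never written.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasContext("functional-20211117115319-2067")
	fmt.Println(ok, err)
}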

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:46: failed waiting for storage-provisioner: client config: context "functional-20211117115319-2067" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "cb28c1b2262d70883caf50271130ba3b7ad3bf633522ec184f3dcf93a36856a5",
	        "Created": "2021-11-17T19:56:18.361617018Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (143.792838ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:56:36.853496    4273 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.25s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1517: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1517: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "echo hello": exit status 80 (221.807053ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_d94a149758de690cb366888a5d8e6efc18cafe43_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1522: failed to run an ssh command. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"echo hello\"" : exit status 80
functional_test.go:1526: expected minikube ssh command output to be -"hello"- but got *"\n\n"*. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"echo hello\""
functional_test.go:1534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "cat /etc/hostname"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1534: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "cat /etc/hostname": exit status 80 (221.144669ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_e38561299ab5d398426b8e3871f2ff03f1313dcf_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1540: failed to run an ssh command. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"cat /etc/hostname\"" : exit status 80
functional_test.go:1544: expected minikube ssh command output to be -"functional-20211117115319-2067"- but got *"\n\n"*. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/SSHCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "cb28c1b2262d70883caf50271130ba3b7ad3bf633522ec184f3dcf93a36856a5",
	        "Created": "2021-11-17T19:56:18.361617018Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (159.774649ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:56:35.703257    4233 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 80 (251.296919ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_config_8726cae15f99b94c9f6c9c6f69cb2fb49584395b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 80
helpers_test.go:548: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:548: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /home/docker/cp-test.txt": exit status 80 (233.946192ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_48940019a4d8de2af5e76dec57356a4c5420c0aa_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
helpers_test.go:553: failed to run an cp command. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"sudo cat /home/docker/cp-test.txt\"" : exit status 80

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:562: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"\n\n",
)
--- FAIL: TestFunctional/parallel/CpCmd (0.49s)

                                                
                                    
TestFunctional/parallel/MySQL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1571: (dbg) Run:  kubectl --context functional-20211117115319-2067 replace --force -f testdata/mysql.yaml
functional_test.go:1571: (dbg) Non-zero exit: kubectl --context functional-20211117115319-2067 replace --force -f testdata/mysql.yaml: exit status 1 (41.510715ms)

                                                
                                                
** stderr ** 
	W1117 11:56:33.448124    4154 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	error: context "functional-20211117115319-2067" does not exist

                                                
                                                
** /stderr **
functional_test.go:1573: failed to kubectl replace mysql: args "kubectl --context functional-20211117115319-2067 replace --force -f testdata/mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "cb28c1b2262d70883caf50271130ba3b7ad3bf633522ec184f3dcf93a36856a5",
	        "Created": "2021-11-17T19:56:18.361617018Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (153.464915ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:56:33.715974    4159 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/MySQL (0.31s)

                                                
                                    
TestFunctional/parallel/FileSync (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1707: Checking for existence of /etc/test/nested/copy/2067/hosts within VM
functional_test.go:1709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /etc/test/nested/copy/2067/hosts"
functional_test.go:1709: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /etc/test/nested/copy/2067/hosts": exit status 80 (204.154455ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_0b4f0329f197f00ad10b7887b880e6a43458a8ab_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1711: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /etc/test/nested/copy/2067/hosts" failed: exit status 80
functional_test.go:1714: file sync test content: 

                                                
                                                
functional_test.go:1724: /etc/sync.test content mismatch (-want +got):
string(
- 	"Test file for checking file sync process",
+ 	"\n\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/FileSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "cb28c1b2262d70883caf50271130ba3b7ad3bf633522ec184f3dcf93a36856a5",
	        "Created": "2021-11-17T19:56:18.361617018Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (357.933992ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:56:33.402914    4149 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/FileSync (0.70s)

                                                
                                    
TestFunctional/parallel/CertSync (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1750: Checking for existence of /etc/ssl/certs/2067.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /etc/ssl/certs/2067.pem"
functional_test.go:1751: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /etc/ssl/certs/2067.pem": exit status 80 (195.093549ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_6d47180de0f1cf87d843037796714427bb3df277_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1753: failed to check existence of "/etc/ssl/certs/2067.pem" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"sudo cat /etc/ssl/certs/2067.pem\"": exit status 80
functional_test.go:1759: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/2067.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1750: Checking for existence of /usr/share/ca-certificates/2067.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /usr/share/ca-certificates/2067.pem"
functional_test.go:1751: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /usr/share/ca-certificates/2067.pem": exit status 80 (195.259453ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_9732054b4d589fcf14cbdeee8def265946d9e5d7_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1753: failed to check existence of "/usr/share/ca-certificates/2067.pem" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"sudo cat /usr/share/ca-certificates/2067.pem\"": exit status 80
functional_test.go:1759: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/2067.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1750: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1751: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1751: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 80 (193.820957ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_c1fb1ee25ebb7a3edd1a0da000c23bf1f788dc55_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1753: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 80
functional_test.go:1759: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1777: Checking for existence of /etc/ssl/certs/20672.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /etc/ssl/certs/20672.pem"
functional_test.go:1778: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /etc/ssl/certs/20672.pem": exit status 80 (191.807806ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_ad3767e541e6d7a3da44755929cd7416a0d50164_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1780: failed to check existence of "/etc/ssl/certs/20672.pem" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"sudo cat /etc/ssl/certs/20672.pem\"": exit status 80
functional_test.go:1786: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/20672.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
functional_test.go:1777: Checking for existence of /usr/share/ca-certificates/20672.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /usr/share/ca-certificates/20672.pem"
functional_test.go:1778: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /usr/share/ca-certificates/20672.pem": exit status 80 (192.731055ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_8d95c0381d411506a048d66469a1a4059c863229_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1780: failed to check existence of "/usr/share/ca-certificates/20672.pem" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"sudo cat /usr/share/ca-certificates/20672.pem\"": exit status 80
functional_test.go:1786: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/20672.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
functional_test.go:1777: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1778: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1778: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 80 (228.89773ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1780: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 80
functional_test.go:1786: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/CertSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "cb28c1b2262d70883caf50271130ba3b7ad3bf633522ec184f3dcf93a36856a5",
	        "Created": "2021-11-17T19:56:18.361617018Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (144.348166ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:56:32.709757    4135 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/CertSync (1.45s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:213: (dbg) Run:  kubectl --context functional-20211117115319-2067 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:213: (dbg) Non-zero exit: kubectl --context functional-20211117115319-2067 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (38.041995ms)

                                                
                                                
** stderr ** 
	W1117 11:56:30.565716    4070 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117115319-2067

                                                
                                                
** /stderr **
functional_test.go:215: failed to 'kubectl get nodes' with args "kubectl --context functional-20211117115319-2067 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:221: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	W1117 11:56:30.565716    4070 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117115319-2067

                                                
                                                
** /stderr **
functional_test.go:221: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	W1117 11:56:30.565716    4070 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117115319-2067

                                                
                                                
** /stderr **
functional_test.go:221: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	W1117 11:56:30.565716    4070 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117115319-2067

                                                
                                                
** /stderr **
functional_test.go:221: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	W1117 11:56:30.565716    4070 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117115319-2067
helpers_test.go:235: (dbg) docker inspect functional-20211117115319-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117115319-2067",
	        "Id": "cb28c1b2262d70883caf50271130ba3b7ad3bf633522ec184f3dcf93a36856a5",
	        "Created": "2021-11-17T19:56:18.361617018Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117115319-2067 -n functional-20211117115319-2067: exit status 7 (143.919321ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:56:30.818080    4075 status.go:247] status error: host: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117115319-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/NodeLabels (0.30s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1805: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1805: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo systemctl is-active crio": exit status 80 (209.478166ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_6b7239aee4f25975002bb6e89d3a731164a5501d_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1808: output of 
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_6b7239aee4f25975002bb6e89d3a731164a5501d_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **: exit status 80
functional_test.go:1811: For runtime "docker": expected "crio" to be inactive but got "\n\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

                                                
                                    
TestFunctional/parallel/Version/components (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2051: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 version -o=json --components
functional_test.go:2051: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 version -o=json --components: exit status 80 (196.337878ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_version_4aca586f1e1becae668b759539b2a1d01ad61d4e_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2053: error version: exit status 80
functional_test.go:2058: expected to see "buildctl" in the minikube version --components but got:

functional_test.go:2058: expected to see "commit" in the minikube version --components but got:

functional_test.go:2058: expected to see "containerd" in the minikube version --components but got:

functional_test.go:2058: expected to see "crictl" in the minikube version --components but got:

functional_test.go:2058: expected to see "crio" in the minikube version --components but got:

functional_test.go:2058: expected to see "ctr" in the minikube version --components but got:

functional_test.go:2058: expected to see "docker" in the minikube version --components but got:

functional_test.go:2058: expected to see "minikubeVersion" in the minikube version --components but got:

functional_test.go:2058: expected to see "podman" in the minikube version --components but got:

functional_test.go:2058: expected to see "run" in the minikube version --components but got:

functional_test.go:2058: expected to see "crun" in the minikube version --components but got:

--- FAIL: TestFunctional/parallel/Version/components (0.20s)
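Each assertion above greps a single expected key out of the `minikube version --components` output, which came back empty because the command itself exited with status 80. A minimal, hypothetical reproduction sketch in Go (not the functional_test.go code; the binary path, profile name, and key list are copied from the log above) reduces the check to:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test drives; in this run it exited with status 80.
	out, err := exec.Command("out/minikube-darwin-amd64",
		"-p", "functional-20211117115319-2067",
		"version", "--components").CombinedOutput()
	if err != nil {
		fmt.Println("version --components failed:", err)
	}
	// Keys mirror the assertions at functional_test.go:2058 above.
	for _, key := range []string{"buildctl", "commit", "containerd", "crictl",
		"crio", "ctr", "docker", "minikubeVersion", "podman", "run", "crun"} {
		if !strings.Contains(string(out), key) {
			fmt.Printf("expected to see %q in the output but it is missing\n", key)
		}
	}
}

With the guest container gone, every key is reported missing, which is exactly the cascade of identical assertion failures shown above.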

TestFunctional/parallel/ImageCommands/ImageList (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageList
=== PAUSE TestFunctional/parallel/ImageCommands/ImageList

=== CONT  TestFunctional/parallel/ImageCommands/ImageList
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 image ls
functional_test.go:255: expected k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageList (0.17s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh pgrep buildkitd
functional_test.go:264: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh pgrep buildkitd: exit status 80 (191.403529ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_90b035341dad3264896227ccd5ca14ead8f761a2_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:271: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 image build -t localhost/my-image:functional-20211117115319-2067 testdata/build
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 image ls
functional_test.go:384: expected "localhost/my-image:functional-20211117115319-2067" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.52s)
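The ImageBuild flow above is: probe for buildkitd over ssh, build testdata/build into localhost/my-image:<profile>, then confirm the tag via `image ls`. A hedged sketch of that sequence, assuming the same binary path and profile as in the log (the run helper below is hypothetical, not a minikube API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const profile = "functional-20211117115319-2067"

// run is a hypothetical helper that shells out to the minikube binary under test.
func run(args ...string) (string, error) {
	all := append([]string{"-p", profile}, args...)
	out, err := exec.Command("out/minikube-darwin-amd64", all...).CombinedOutput()
	return string(out), err
}

func main() {
	// The buildkitd probe fails here with GUEST_STATUS because the cluster
	// container backing the profile no longer exists.
	if _, err := run("ssh", "pgrep", "buildkitd"); err != nil {
		fmt.Println("buildkitd probe failed (guest unreachable in this run):", err)
	}
	tag := "localhost/my-image:" + profile
	if _, err := run("image", "build", "-t", tag, "testdata/build"); err != nil {
		fmt.Println("image build failed:", err)
	}
	if ls, _ := run("image", "ls"); !strings.Contains(ls, tag) {
		fmt.Printf("expected %q to be loaded into minikube but the image is not there\n", tag)
	}
}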

TestFunctional/parallel/DockerEnv/bash (0.2s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:440: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211117115319-2067 docker-env) && out/minikube-darwin-amd64 status -p functional-20211117115319-2067"
functional_test.go:440: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211117115319-2067 docker-env) && out/minikube-darwin-amd64 status -p functional-20211117115319-2067": exit status 1 (196.677836ms)

** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                               │
	│    * If the above advice does not help, please let us know:                                                                   │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                 │
	│                                                                                                                               │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                      │
	│    * Please also attach the following file to the GitHub issue:                                                               │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_docker-env_0286061359b7d88e1c575f824495f60db2866fdd_0.log    │
	│                                                                                                                               │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:446: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.20s)
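DockerEnv/bash wraps a single bash pipeline: eval the exported docker-env variables, then run `minikube status` inside that environment. A minimal sketch of the same pipeline, assuming only the Go standard library and the paths shown above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same shell pipeline the test runs: point the docker client at the
	// cluster's daemon, then ask for cluster status in that environment.
	profile := "functional-20211117115319-2067"
	cmd := "eval $(out/minikube-darwin-amd64 -p " + profile + " docker-env) && " +
		"out/minikube-darwin-amd64 status -p " + profile
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// In this run the status step exits 1 with GUEST_STATUS because the
		// docker container backing the profile was never created.
		fmt.Println("status after eval-ing docker-env failed:", err)
	}
}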

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1897: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 update-context --alsologtostderr -v=2
functional_test.go:1897: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 update-context --alsologtostderr -v=2: exit status 80 (189.113417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 11:57:06.861814    4494 out.go:297] Setting OutFile to fd 1 ...
	I1117 11:57:06.862043    4494 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:57:06.862048    4494 out.go:310] Setting ErrFile to fd 2...
	I1117 11:57:06.862053    4494 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:57:06.862132    4494 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 11:57:06.862306    4494 mustload.go:65] Loading cluster: functional-20211117115319-2067
	I1117 11:57:06.862530    4494 config.go:176] Loaded profile config "functional-20211117115319-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 11:57:06.862868    4494 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:57:06.958128    4494 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:57:06.985397    4494 out.go:176] 
	W1117 11:57:06.985572    4494 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	W1117 11:57:06.985588    4494 out.go:241] * 
	* 
	W1117 11:57:06.988596    4494 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 11:57:07.010246    4494 out.go:176] 

** /stderr **
functional_test.go:1899: failed to run minikube update-context: args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:1904: update-context: got="\n\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1897: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 update-context --alsologtostderr -v=2
functional_test.go:1897: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 update-context --alsologtostderr -v=2: exit status 80 (446.861216ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 11:57:07.247399    4504 out.go:297] Setting OutFile to fd 1 ...
	I1117 11:57:07.247535    4504 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:57:07.247539    4504 out.go:310] Setting ErrFile to fd 2...
	I1117 11:57:07.247542    4504 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:57:07.247614    4504 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 11:57:07.247784    4504 mustload.go:65] Loading cluster: functional-20211117115319-2067
	I1117 11:57:07.248011    4504 config.go:176] Loaded profile config "functional-20211117115319-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 11:57:07.248357    4504 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:57:07.600577    4504 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:57:07.627998    4504 out.go:176] 
	W1117 11:57:07.628197    4504 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	W1117 11:57:07.628214    4504 out.go:241] * 
	* 
	W1117 11:57:07.632300    4504 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 11:57:07.653784    4504 out.go:176] 

** /stderr **
functional_test.go:1899: failed to run minikube update-context: args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:1904: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1897: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 update-context --alsologtostderr -v=2
functional_test.go:1897: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 update-context --alsologtostderr -v=2: exit status 80 (194.691847ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 11:57:07.052208    4499 out.go:297] Setting OutFile to fd 1 ...
	I1117 11:57:07.052341    4499 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:57:07.052345    4499 out.go:310] Setting ErrFile to fd 2...
	I1117 11:57:07.052348    4499 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:57:07.052420    4499 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 11:57:07.052606    4499 mustload.go:65] Loading cluster: functional-20211117115319-2067
	I1117 11:57:07.052830    4499 config.go:176] Loaded profile config "functional-20211117115319-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 11:57:07.053174    4499 cli_runner.go:115] Run: docker container inspect functional-20211117115319-2067 --format={{.State.Status}}
	W1117 11:57:07.152756    4499 cli_runner.go:162] docker container inspect functional-20211117115319-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:57:07.180386    4499 out.go:176] 
	W1117 11:57:07.180547    4499 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	W1117 11:57:07.180563    4499 out.go:241] * 
	* 
	W1117 11:57:07.184529    4499 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 11:57:07.205896    4499 out.go:176] 

** /stderr **
functional_test.go:1899: failed to run minikube update-context: args "out/minikube-darwin-amd64 -p functional-20211117115319-2067 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:1904: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
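All three UpdateContextCmd subtests issue the identical `update-context` command and differ only in the phrase they expect in the output (see the got/want lines at functional_test.go:1904 above). A hedged sketch of that shared pattern, with the expected phrases taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One invocation per subtest; only the expected phrase differs.
	wants := map[string]string{
		"no_changes":          "No changes",
		"no_minikube_cluster": "context has been updated",
		"no_clusters":         "context has been updated",
	}
	for name, want := range wants {
		out, err := exec.Command("out/minikube-darwin-amd64",
			"-p", "functional-20211117115319-2067",
			"update-context", "--alsologtostderr", "-v=2").CombinedOutput()
		if err != nil || !strings.Contains(string(out), want) {
			fmt.Printf("%s: got %q, want substring %q (err: %v)\n", name, out, want, err)
		}
	}
}

In this run every invocation exits 80 with the same GUEST_STATUS error, so all three subtests report got="\n\n" against their respective want strings.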

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211117115319-2067

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117115319-2067 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211117115319-2067: (2.508196156s)
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 image ls

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:384: expected "gcr.io/google-containers/addon-resizer:functional-20211117115319-2067" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.69s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 image save gcr.io/google-containers/addon-resizer:functional-20211117115319-2067 /Users/jenkins/workspace/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:327: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 image load /Users/jenkins/workspace/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 image ls
functional_test.go:384: expected "gcr.io/google-containers/addon-resizer:functional-20211117115319-2067" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.37s)
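ImageLoadDaemon, ImageSaveToFile and ImageLoadFromFile together form a round trip: load the addon-resizer tag from the host daemon, save it to a tarball, load the tarball back, and verify with `image ls`. A hedged sketch of that round trip (the mk helper is hypothetical; image name, tarball path and profile come from the log above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

const (
	profile = "functional-20211117115319-2067"
	image   = "gcr.io/google-containers/addon-resizer:" + profile
	tarball = "/Users/jenkins/workspace/addon-resizer-save.tar"
)

// mk is a hypothetical helper wrapping the minikube binary under test.
func mk(args ...string) (string, error) {
	all := append([]string{"-p", profile}, args...)
	out, err := exec.Command("out/minikube-darwin-amd64", all...).CombinedOutput()
	return string(out), err
}

func main() {
	mk("image", "load", "--daemon", image) // push from the host docker daemon
	mk("image", "save", image, tarball)    // export the image to a tarball
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("expected tarball after `image save`, but:", err)
	}
	mk("image", "load", tarball) // re-import from the tarball
	if ls, _ := mk("image", "ls"); !strings.Contains(ls, image) {
		fmt.Printf("expected %q to be loaded into minikube but the image is not there\n", image)
	}
}

With no running cluster, the save step produces no tarball and the ls step never lists the image, matching the three failures above.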

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:143: failed to get Kubernetes client for "functional-20211117115319-2067": client config: context "functional-20211117115319-2067" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)
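The tunnel setup fails before it ever reaches the service, because no kubeconfig context named after the profile exists. A hedged client-go sketch of that context lookup, assuming the standard k8s.io/client-go packages rather than the test's own helper:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve client config for the named context; with the context missing,
	// ClientConfig() returns the "context ... does not exist" error seen above.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "functional-20211117115319-2067"}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		fmt.Println("client config:", err)
		return
	}
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		fmt.Println("building clientset:", err)
	}
}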

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (115.89s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect

=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:223: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:225: (dbg) Run:  kubectl --context functional-20211117115319-2067 get svc nginx-svc
functional_test_tunnel_test.go:225: (dbg) Non-zero exit: kubectl --context functional-20211117115319-2067 get svc nginx-svc: exit status 1 (42.10847ms)

** stderr ** 
	W1117 11:58:31.595376    4563 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117115319-2067

** /stderr **
functional_test_tunnel_test.go:227: kubectl --context functional-20211117115319-2067 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:229: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:236: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (115.89s)
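AccessDirect ends up requesting the literal URL "http://" because it never obtained an address for nginx-svc, hence the "no Host in request URL" error. A hedged sketch of the manual equivalent, assuming the service would expose a LoadBalancer ingress IP while `minikube tunnel` is running:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the tunnel-exposed IP the way a manual check would:
	// the LoadBalancer ingress IP of nginx-svc in the profile's context.
	out, err := exec.Command("kubectl", "--context", "functional-20211117115319-2067",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	ip := strings.TrimSpace(string(out))
	if err != nil || ip == "" {
		// With the context missing and no IP, the URL degenerates to "http://",
		// which is exactly the "no Host in request URL" failure above.
		fmt.Println("no service IP available:", err)
		return
	}
	resp, err := http.Get("http://" + ip)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if !strings.Contains(string(body), "Welcome to nginx!") {
		fmt.Println("expected body to contain \"Welcome to nginx!\"")
	}
}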

TestIngressAddonLegacy/StartLegacyK8sCluster (48.38s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20211117115836-2067 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
ingress_addon_legacy_test.go:40: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20211117115836-2067 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 80 (48.368386225s)

-- stdout --
	* [ingress-addon-legacy-20211117115836-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node ingress-addon-legacy-20211117115836-2067 in cluster ingress-addon-legacy-20211117115836-2067
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* docker "ingress-addon-legacy-20211117115836-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	
	

                                                
** stderr ** 
	I1117 11:58:36.549525    4619 out.go:297] Setting OutFile to fd 1 ...
	I1117 11:58:36.549665    4619 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:58:36.549669    4619 out.go:310] Setting ErrFile to fd 2...
	I1117 11:58:36.549672    4619 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:58:36.549745    4619 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 11:58:36.550056    4619 out.go:304] Setting JSON to false
	I1117 11:58:36.573687    4619 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1691,"bootTime":1637177425,"procs":320,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 11:58:36.573777    4619 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 11:58:36.600709    4619 out.go:176] * [ingress-addon-legacy-20211117115836-2067] minikube v1.24.0 on Darwin 11.1
	I1117 11:58:36.600832    4619 notify.go:174] Checking for updates...
	I1117 11:58:36.647656    4619 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 11:58:36.673546    4619 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 11:58:36.699662    4619 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 11:58:36.725371    4619 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 11:58:36.726205    4619 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 11:58:36.815072    4619 docker.go:132] docker version: linux-20.10.5
	I1117 11:58:36.815206    4619 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:58:36.963964    4619 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 19:58:36.92375481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:58:37.011560    4619 out.go:176] * Using the docker driver based on user configuration
	I1117 11:58:37.011660    4619 start.go:280] selected driver: docker
	I1117 11:58:37.011669    4619 start.go:775] validating driver "docker" against <nil>
	I1117 11:58:37.011687    4619 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 11:58:37.015006    4619 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:58:37.160989    4619 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 19:58:37.12222187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:58:37.161086    4619 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 11:58:37.161202    4619 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 11:58:37.161219    4619 cni.go:93] Creating CNI manager for ""
	I1117 11:58:37.161225    4619 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 11:58:37.161229    4619 start_flags.go:282] config:
	{Name:ingress-addon-legacy-20211117115836-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20211117115836-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 11:58:37.188232    4619 out.go:176] * Starting control plane node ingress-addon-legacy-20211117115836-2067 in cluster ingress-addon-legacy-20211117115836-2067
	I1117 11:58:37.188297    4619 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 11:58:37.213824    4619 out.go:176] * Pulling base image ...
	I1117 11:58:37.213878    4619 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 11:58:37.213936    4619 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 11:58:37.282156    4619 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1117 11:58:37.282179    4619 cache.go:57] Caching tarball of preloaded images
	I1117 11:58:37.282383    4619 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 11:58:37.307881    4619 out.go:176] * Downloading Kubernetes v1.18.20 preload ...
	I1117 11:58:37.307918    4619 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1117 11:58:37.350519    4619 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 11:58:37.350538    4619 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 11:58:37.396545    4619 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:de306a65f7d728d77c3b068e74796a19 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1117 11:58:40.069520    4619 preload.go:248] saving checksum for preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1117 11:58:40.069669    4619 preload.go:255] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1117 11:58:40.846857    4619 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1117 11:58:40.847057    4619 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/ingress-addon-legacy-20211117115836-2067/config.json ...
	I1117 11:58:40.847086    4619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/ingress-addon-legacy-20211117115836-2067/config.json: {Name:mke4953bcc5022e618237b1ade50d6325e79b66c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 11:58:40.847388    4619 cache.go:206] Successfully downloaded all kic artifacts
	I1117 11:58:40.847416    4619 start.go:313] acquiring machines lock for ingress-addon-legacy-20211117115836-2067: {Name:mkc8ae1e9d5ab90f57660e8d9a111cf4021e2660 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:58:40.847535    4619 start.go:317] acquired machines lock for "ingress-addon-legacy-20211117115836-2067" in 112.508µs
	I1117 11:58:40.847558    4619 start.go:89] Provisioning new machine with config: &{Name:ingress-addon-legacy-20211117115836-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20211117115836-2067 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ControlPlane:true Worker:true}
	I1117 11:58:40.847614    4619 start.go:126] createHost starting for "" (driver="docker")
	I1117 11:58:40.903744    4619 out.go:203] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1117 11:58:40.904013    4619 start.go:160] libmachine.API.Create for "ingress-addon-legacy-20211117115836-2067" (driver="docker")
	I1117 11:58:40.904054    4619 client.go:168] LocalClient.Create starting
	I1117 11:58:40.904262    4619 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 11:58:40.904339    4619 main.go:130] libmachine: Decoding PEM data...
	I1117 11:58:40.904369    4619 main.go:130] libmachine: Parsing certificate...
	I1117 11:58:40.904467    4619 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 11:58:40.904522    4619 main.go:130] libmachine: Decoding PEM data...
	I1117 11:58:40.904542    4619 main.go:130] libmachine: Parsing certificate...
	I1117 11:58:40.905331    4619 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117115836-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 11:58:41.003395    4619 cli_runner.go:162] docker network inspect ingress-addon-legacy-20211117115836-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 11:58:41.003512    4619 network_create.go:254] running [docker network inspect ingress-addon-legacy-20211117115836-2067] to gather additional debugging logs...
	I1117 11:58:41.003532    4619 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117115836-2067
	W1117 11:58:41.098869    4619 cli_runner.go:162] docker network inspect ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:58:41.098910    4619 network_create.go:257] error running [docker network inspect ingress-addon-legacy-20211117115836-2067]: docker network inspect ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:41.098940    4619 network_create.go:259] output of [docker network inspect ingress-addon-legacy-20211117115836-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20211117115836-2067
	
	** /stderr **
	I1117 11:58:41.099065    4619 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:58:41.197269    4619 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003140d0] misses:0}
	I1117 11:58:41.197305    4619 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:58:41.197326    4619 network_create.go:106] attempt to create docker network ingress-addon-legacy-20211117115836-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 11:58:41.197413    4619 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20211117115836-2067
	I1117 11:58:45.136617    4619 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20211117115836-2067: (3.939170384s)
	I1117 11:58:45.136648    4619 network_create.go:90] docker network ingress-addon-legacy-20211117115836-2067 192.168.49.0/24 created
	I1117 11:58:45.136669    4619 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20211117115836-2067" container
	I1117 11:58:45.136784    4619 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 11:58:45.231245    4619 cli_runner.go:115] Run: docker volume create ingress-addon-legacy-20211117115836-2067 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117115836-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 11:58:45.328906    4619 oci.go:102] Successfully created a docker volume ingress-addon-legacy-20211117115836-2067
	I1117 11:58:45.329041    4619 cli_runner.go:115] Run: docker run --rm --name ingress-addon-legacy-20211117115836-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117115836-2067 --entrypoint /usr/bin/test -v ingress-addon-legacy-20211117115836-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 11:58:45.798674    4619 oci.go:106] Successfully prepared a docker volume ingress-addon-legacy-20211117115836-2067
	E1117 11:58:45.798727    4619 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 11:58:45.798737    4619 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 11:58:45.798752    4619 client.go:171] LocalClient.Create took 4.894725531s
	I1117 11:58:45.798759    4619 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 11:58:45.798884    4619 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117115836-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 11:58:47.799704    4619 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:58:47.799834    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:58:47.940630    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:58:47.940726    4619 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:48.218037    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:58:48.336134    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:58:48.336240    4619 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:48.876683    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:58:48.977697    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:58:48.977783    4619 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:49.633581    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:58:49.748786    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	W1117 11:58:49.748879    4619 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	
	W1117 11:58:49.748899    4619 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:49.748910    4619 start.go:129] duration metric: createHost completed in 8.901358219s
	I1117 11:58:49.748917    4619 start.go:80] releasing machines lock for "ingress-addon-legacy-20211117115836-2067", held for 8.901442285s
	W1117 11:58:49.748933    4619 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 11:58:49.749540    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:58:49.867436    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:58:49.867504    4619 delete.go:82] Unable to get host status for ingress-addon-legacy-20211117115836-2067, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	W1117 11:58:49.867695    4619 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 11:58:49.867708    4619 start.go:547] Will try again in 5 seconds ...
	I1117 11:58:51.338951    4619 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117115836-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.540067244s)
	I1117 11:58:51.338967    4619 kic.go:188] duration metric: took 5.540250 seconds to extract preloaded images to volume
	I1117 11:58:54.877930    4619 start.go:313] acquiring machines lock for ingress-addon-legacy-20211117115836-2067: {Name:mkc8ae1e9d5ab90f57660e8d9a111cf4021e2660 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:58:54.878106    4619 start.go:317] acquired machines lock for "ingress-addon-legacy-20211117115836-2067" in 143.237µs
	I1117 11:58:54.878149    4619 start.go:93] Skipping create...Using existing machine configuration
	I1117 11:58:54.878161    4619 fix.go:55] fixHost starting: 
	I1117 11:58:54.878616    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:58:54.975677    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:58:54.975720    4619 fix.go:108] recreateIfNeeded on ingress-addon-legacy-20211117115836-2067: state= err=unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:54.975734    4619 fix.go:113] machineExists: false. err=machine does not exist
	I1117 11:58:55.000937    4619 out.go:176] * docker "ingress-addon-legacy-20211117115836-2067" container is missing, will recreate.
	I1117 11:58:55.000998    4619 delete.go:124] DEMOLISHING ingress-addon-legacy-20211117115836-2067 ...
	I1117 11:58:55.001258    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:58:55.098277    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:58:55.098324    4619 stop.go:75] unable to get state: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:55.098343    4619 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:55.098742    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:58:55.194833    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:58:55.194875    4619 delete.go:82] Unable to get host status for ingress-addon-legacy-20211117115836-2067, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:55.194970    4619 cli_runner.go:115] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20211117115836-2067
	W1117 11:58:55.291091    4619 cli_runner.go:162] docker container inspect -f {{.Id}} ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:58:55.291120    4619 kic.go:360] could not find the container ingress-addon-legacy-20211117115836-2067 to remove it. will try anyways
	I1117 11:58:55.291202    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:58:55.389035    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:58:55.389077    4619 oci.go:83] error getting container status, will try to delete anyways: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:55.389174    4619 cli_runner.go:115] Run: docker exec --privileged -t ingress-addon-legacy-20211117115836-2067 /bin/bash -c "sudo init 0"
	W1117 11:58:55.486902    4619 cli_runner.go:162] docker exec --privileged -t ingress-addon-legacy-20211117115836-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 11:58:55.486926    4619 oci.go:656] error shutdown ingress-addon-legacy-20211117115836-2067: docker exec --privileged -t ingress-addon-legacy-20211117115836-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:56.494391    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:58:56.599208    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:58:56.599254    4619 oci.go:668] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:56.599262    4619 oci.go:670] temporary error: container ingress-addon-legacy-20211117115836-2067 status is  but expect it to be exited
	I1117 11:58:56.599285    4619 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:57.064853    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:58:57.164884    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:58:57.164925    4619 oci.go:668] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:57.164939    4619 oci.go:670] temporary error: container ingress-addon-legacy-20211117115836-2067 status is  but expect it to be exited
	I1117 11:58:57.164961    4619 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:58.056548    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:58:58.157885    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:58:58.157923    4619 oci.go:668] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:58.157931    4619 oci.go:670] temporary error: container ingress-addon-legacy-20211117115836-2067 status is  but expect it to be exited
	I1117 11:58:58.157959    4619 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:58.796384    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:58:58.894480    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:58:58.894518    4619 oci.go:668] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:58:58.894526    4619 oci.go:670] temporary error: container ingress-addon-legacy-20211117115836-2067 status is  but expect it to be exited
	I1117 11:58:58.894550    4619 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:00.009375    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:59:00.108341    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:59:00.108381    4619 oci.go:668] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:00.108389    4619 oci.go:670] temporary error: container ingress-addon-legacy-20211117115836-2067 status is  but expect it to be exited
	I1117 11:59:00.108412    4619 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:01.629831    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:59:01.730205    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:59:01.730244    4619 oci.go:668] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:01.730252    4619 oci.go:670] temporary error: container ingress-addon-legacy-20211117115836-2067 status is  but expect it to be exited
	I1117 11:59:01.730274    4619 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:04.775244    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:59:04.874747    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:59:04.874789    4619 oci.go:668] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:04.874798    4619 oci.go:670] temporary error: container ingress-addon-legacy-20211117115836-2067 status is  but expect it to be exited
	I1117 11:59:04.874822    4619 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:10.657077    4619 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:59:10.756517    4619 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	I1117 11:59:10.756557    4619 oci.go:668] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:10.756567    4619 oci.go:670] temporary error: container ingress-addon-legacy-20211117115836-2067 status is  but expect it to be exited
	I1117 11:59:10.756592    4619 oci.go:87] couldn't shut down ingress-addon-legacy-20211117115836-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	 
	I1117 11:59:10.756677    4619 cli_runner.go:115] Run: docker rm -f -v ingress-addon-legacy-20211117115836-2067
	I1117 11:59:10.852326    4619 cli_runner.go:115] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20211117115836-2067
	W1117 11:59:10.946814    4619 cli_runner.go:162] docker container inspect -f {{.Id}} ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:59:10.946939    4619 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117115836-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:59:11.044037    4619 cli_runner.go:115] Run: docker network rm ingress-addon-legacy-20211117115836-2067
	I1117 11:59:13.848881    4619 cli_runner.go:168] Completed: docker network rm ingress-addon-legacy-20211117115836-2067: (2.804815445s)
	W1117 11:59:13.849185    4619 delete.go:139] delete failed (probably ok) <nil>
	I1117 11:59:13.849191    4619 fix.go:120] Sleeping 1 second for extra luck!
	I1117 11:59:14.856182    4619 start.go:126] createHost starting for "" (driver="docker")
	I1117 11:59:14.883480    4619 out.go:203] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1117 11:59:14.883674    4619 start.go:160] libmachine.API.Create for "ingress-addon-legacy-20211117115836-2067" (driver="docker")
	I1117 11:59:14.883715    4619 client.go:168] LocalClient.Create starting
	I1117 11:59:14.883871    4619 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 11:59:14.883945    4619 main.go:130] libmachine: Decoding PEM data...
	I1117 11:59:14.883969    4619 main.go:130] libmachine: Parsing certificate...
	I1117 11:59:14.884059    4619 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 11:59:14.884123    4619 main.go:130] libmachine: Decoding PEM data...
	I1117 11:59:14.884145    4619 main.go:130] libmachine: Parsing certificate...
	I1117 11:59:14.905922    4619 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117115836-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 11:59:15.003470    4619 cli_runner.go:162] docker network inspect ingress-addon-legacy-20211117115836-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 11:59:15.003570    4619 network_create.go:254] running [docker network inspect ingress-addon-legacy-20211117115836-2067] to gather additional debugging logs...
	I1117 11:59:15.003589    4619 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117115836-2067
	W1117 11:59:15.099780    4619 cli_runner.go:162] docker network inspect ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:59:15.099810    4619 network_create.go:257] error running [docker network inspect ingress-addon-legacy-20211117115836-2067]: docker network inspect ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:15.099821    4619 network_create.go:259] output of [docker network inspect ingress-addon-legacy-20211117115836-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20211117115836-2067
	
	** /stderr **
	I1117 11:59:15.099917    4619 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 11:59:15.195394    4619 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003140d0] amended:false}} dirty:map[] misses:0}
	I1117 11:59:15.195431    4619 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:59:15.195598    4619 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003140d0] amended:true}} dirty:map[192.168.49.0:0xc0003140d0 192.168.58.0:0xc0003142d0] misses:0}
	I1117 11:59:15.195616    4619 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 11:59:15.195623    4619 network_create.go:106] attempt to create docker network ingress-addon-legacy-20211117115836-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 11:59:15.195703    4619 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20211117115836-2067
	I1117 11:59:19.015041    4619 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20211117115836-2067: (3.819318846s)
	I1117 11:59:19.015064    4619 network_create.go:90] docker network ingress-addon-legacy-20211117115836-2067 192.168.58.0/24 created
	I1117 11:59:19.015075    4619 kic.go:106] calculated static IP "192.168.58.2" for the "ingress-addon-legacy-20211117115836-2067" container
	I1117 11:59:19.015188    4619 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 11:59:19.112230    4619 cli_runner.go:115] Run: docker volume create ingress-addon-legacy-20211117115836-2067 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117115836-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 11:59:19.207037    4619 oci.go:102] Successfully created a docker volume ingress-addon-legacy-20211117115836-2067
	I1117 11:59:19.207165    4619 cli_runner.go:115] Run: docker run --rm --name ingress-addon-legacy-20211117115836-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117115836-2067 --entrypoint /usr/bin/test -v ingress-addon-legacy-20211117115836-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 11:59:19.674902    4619 oci.go:106] Successfully prepared a docker volume ingress-addon-legacy-20211117115836-2067
	E1117 11:59:19.674951    4619 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 11:59:19.674963    4619 client.go:171] LocalClient.Create took 4.791276033s
	I1117 11:59:19.674983    4619 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 11:59:19.675002    4619 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 11:59:19.675134    4619 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117115836-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 11:59:21.675863    4619 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:59:21.675963    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:59:21.795522    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:59:21.795624    4619 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:21.974508    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:59:22.089757    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:59:22.089857    4619 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:22.420313    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:59:22.534266    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:59:22.534355    4619 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:23.003456    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:59:23.102783    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	W1117 11:59:23.102867    4619 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	
	W1117 11:59:23.102888    4619 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:23.102904    4619 start.go:129] duration metric: createHost completed in 8.246724561s
	I1117 11:59:23.102964    4619 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 11:59:23.103022    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:59:23.202850    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:59:23.202952    4619 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:23.408122    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:59:23.526282    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:59:23.526396    4619 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:23.827354    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:59:23.941193    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	I1117 11:59:23.941282    4619 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:24.612231    4619 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067
	W1117 11:59:24.731579    4619 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067 returned with exit code 1
	W1117 11:59:24.731667    4619 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	
	W1117 11:59:24.731686    4619 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117115836-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117115836-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	I1117 11:59:24.731699    4619 fix.go:57] fixHost completed within 29.853761404s
	I1117 11:59:24.731707    4619 start.go:80] releasing machines lock for "ingress-addon-legacy-20211117115836-2067", held for 29.853810855s
	W1117 11:59:24.731866    4619 out.go:241] * Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20211117115836-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20211117115836-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 11:59:24.780429    4619 out.go:176] 
	W1117 11:59:24.780573    4619 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 11:59:24.780589    4619 out.go:241] * 
	* 
	W1117 11:59:24.781295    4619 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 11:59:24.863393    4619 out.go:176] 

** /stderr **
ingress_addon_legacy_test.go:42: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20211117115836-2067 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 80
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (48.38s)
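
For context on the loop that dominates the log above: every retried "docker container inspect -f" call evaluates the same Go template against the container's port map to recover the published 22/tcp host port, and in this run docker exits with "No such container" before the template is ever evaluated. A minimal, self-contained sketch of that lookup follows; the struct fields mirror the docker inspect JSON shape, and the 55000 binding is a made-up example, not a value from this run.

package main

import (
	"os"
	"text/template"
)

// Stand-in for only the fields the inspect template touches; the field names
// mirror `docker container inspect` JSON, the sample values are invented.
type portBinding struct{ HostIP, HostPort string }

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	// The exact format string passed to `docker container inspect -f` above.
	const f = `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`

	var c container
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "55000"}}, // hypothetical binding
	}

	t := template.Must(template.New("port").Parse(f))
	if err := t.Execute(os.Stdout, c); err != nil { // prints '55000'
		panic(err)
	}
}

When the container exists and publishes 22/tcp, the template prints the mapped port; here the container was never created, so each SSH-port lookup fails with exit status 1 and retry.go backs off and tries again.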

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (1.04s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20211117115836-2067 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20211117115836-2067 addons enable ingress --alsologtostderr -v=5: exit status 10 (530.31289ms)

-- stdout --
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I1117 11:59:24.939923    4839 out.go:297] Setting OutFile to fd 1 ...
	I1117 11:59:24.940142    4839 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:59:24.940147    4839 out.go:310] Setting ErrFile to fd 2...
	I1117 11:59:24.940150    4839 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:59:24.940239    4839 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 11:59:24.940659    4839 config.go:176] Loaded profile config "ingress-addon-legacy-20211117115836-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1117 11:59:24.940674    4839 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20211117115836-2067"
	I1117 11:59:24.940682    4839 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20211117115836-2067"
	I1117 11:59:24.940940    4839 host.go:66] Checking if "ingress-addon-legacy-20211117115836-2067" exists ...
	I1117 11:59:24.941431    4839 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}
	W1117 11:59:25.055171    4839 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}} returned with exit code 1
	W1117 11:59:25.055263    4839 host.go:54] host status for "ingress-addon-legacy-20211117115836-2067" returned error: state: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067
	W1117 11:59:25.055294    4839 addons.go:202] "ingress-addon-legacy-20211117115836-2067" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I1117 11:59:25.055332    4839 addons.go:386] Verifying addon ingress=true in "ingress-addon-legacy-20211117115836-2067"
	I1117 11:59:25.179272    4839 out.go:176] * Verifying ingress addon...
	W1117 11:59:25.179382    4839 loader.go:221] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 11:59:25.324199    4839 out.go:176] 
	W1117 11:59:25.324315    4839 out.go:241] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20211117115836-2067" does not exist: client config: context "ingress-addon-legacy-20211117115836-2067" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20211117115836-2067" does not exist: client config: context "ingress-addon-legacy-20211117115836-2067" does not exist]
	W1117 11:59:25.324323    4839 out.go:241] * 
	* 
	W1117 11:59:25.326131    4839 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 11:59:25.416728    4839 out.go:176] 

** /stderr **
ingress_addon_legacy_test.go:72: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20211117115836-2067
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20211117115836-2067:

-- stdout --
	[
	    {
	        "Name": "ingress-addon-legacy-20211117115836-2067",
	        "Id": "0dd36ea6642ac83a7f36e8b9f5da1ca4a3ebfa50d2a7e7f71d77678cf93e8f9f",
	        "Created": "2021-11-17T19:59:15.309584731Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20211117115836-2067 -n ingress-addon-legacy-20211117115836-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20211117115836-2067 -n ingress-addon-legacy-20211117115836-2067: exit status 7 (149.922237ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 11:59:25.927643    4848 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20211117115836-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (1.04s)
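
One thing the post-mortem above does establish: the network from the second create attempt (192.168.58.0/24) exists and has no containers attached, which is consistent with every status and addon command failing with "No such container". Below is a minimal sketch of pulling the subnet and gateway out of that inspect output, using only the fields visible in the -- stdout -- block; the constant is a trimmed copy of that output, not a fresh capture.

package main

import (
	"encoding/json"
	"fmt"
)

// Only the fields read here; the shape follows the `docker inspect` output
// reproduced in the -- stdout -- block above.
type network struct {
	Name string
	IPAM struct {
		Config []struct {
			Subnet  string
			Gateway string
		}
	}
	Containers map[string]interface{}
}

func main() {
	// Trimmed copy of the inspect output shown above.
	const out = `[{"Name":"ingress-addon-legacy-20211117115836-2067",
	              "IPAM":{"Config":[{"Subnet":"192.168.58.0/24","Gateway":"192.168.58.1"}]},
	              "Containers":{}}]`

	var nets []network
	if err := json.Unmarshal([]byte(out), &nets); err != nil {
		panic(err)
	}
	for _, n := range nets {
		for _, cfg := range n.IPAM.Config {
			// ingress-addon-legacy-20211117115836-2067 192.168.58.0/24 192.168.58.1 (0 containers)
			fmt.Printf("%s %s %s (%d containers)\n", n.Name, cfg.Subnet, cfg.Gateway, len(n.Containers))
		}
	}
}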

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.24s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:157: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20211117115836-2067
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20211117115836-2067:

-- stdout --
	[
	    {
	        "Name": "ingress-addon-legacy-20211117115836-2067",
	        "Id": "0dd36ea6642ac83a7f36e8b9f5da1ca4a3ebfa50d2a7e7f71d77678cf93e8f9f",
	        "Created": "2021-11-17T19:59:15.309584731Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20211117115836-2067 -n ingress-addon-legacy-20211117115836-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20211117115836-2067 -n ingress-addon-legacy-20211117115836-2067: exit status 7 (141.463832ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 11:59:26.380020    4862 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20211117115836-2067": docker container inspect ingress-addon-legacy-20211117115836-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117115836-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20211117115836-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.24s)

TestJSONOutput/start/Command (44.53s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20211117115930-2067 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-20211117115930-2067 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : exit status 80 (44.53122231s)

-- stdout --
	{"specversion":"1.0","id":"a99ee297-582c-49c2-90c2-5b97715d8af8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-20211117115930-2067] minikube v1.24.0 on Darwin 11.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1e371bb-b91b-42bb-8ae6-8acb14e4c366","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"3144bdc2-396b-4518-831e-ce96c9b5b772","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig"}}
	{"specversion":"1.0","id":"76e80716-61bc-4ea7-bd12-1b28520dc4da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"5cd243ca-3d7a-4841-9277-c9d38b5d42f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube"}}
	{"specversion":"1.0","id":"2890cc38-6484-40cf-8c24-76ccfa78d9ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc0a0123-217f-49d5-92cf-7ef54fc10f37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-20211117115930-2067 in cluster json-output-20211117115930-2067","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa3d1c5c-0314-47a1-8c93-34f168607573","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7cef96c2-032d-4505-9d50-789ce0ff3e67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"debb5b74-4f84-40ae-bf18-c43be9960edc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"}}
	{"specversion":"1.0","id":"155e3776-c7f7-4c6c-b8b4-24bab0fe0c32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"docker \"json-output-20211117115930-2067\" container is missing, will recreate.","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b06fa302-ab44-4ba7-a105-190ec9f29b3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"090cac46-06f2-4dac-943e-774a0e4ba121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start docker container. Running \"minikube delete -p json-output-20211117115930-2067\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"}}
	{"specversion":"1.0","id":"2a2dc982-624d-41e4-ae4f-f69ecc34769d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules","name":"GUEST_PROVISION","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:59:36.040037    4904 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	E1117 12:00:09.897287    4904 oci.go:173] error getting kernel modules path: Unable to locate kernel modules

                                                
                                                
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-amd64 start -p json-output-20211117115930-2067 --output=json --user=testUser --memory=2200 --wait=true --driver=docker ": exit status 80
--- FAIL: TestJSONOutput/start/Command (44.53s)
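The stdout above is a stream of newline-delimited CloudEvents: one JSON object per line, with minikube's progress data nested under "data". As a minimal sketch of how such a stream can be consumed, the program below decodes each line and prints the step events; the cloudEvent type simply mirrors the keys visible in the output ("specversion", "type", "data.currentstep", ...) and is an illustration written for this report, not minikube's own event type.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the keys visible in the stdout above; it is an
// illustrative type for this report, not minikube's internal definition.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.step" {
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}
}

Fed the stdout block above, this prints currentstep values 0, 1, 3, 5, 8, 8 and 8, which is the sequence the two parallel subtests further down object to.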

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/start/Audit (0.00s)
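All four Audit subtests in this group fail the same way: audit.json never contains an entry for testUser, which is consistent with every command above exiting with an error. As a rough illustration of what such a check amounts to, the helper below scans an audit file for a user name; the layout it assumes (one JSON row per line, with the user recorded under data.user) is an assumption made for this sketch, not a statement about minikube's actual audit format. It reuses the bufio, encoding/json and os imports from the earlier sketch.

// auditContainsUser reports whether any JSON row in the given file carries
// the wanted user. The row layout assumed here (one object per line, user
// stored under data.user) is illustrative only.
func auditContainsUser(path, user string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var row struct {
			Data map[string]string `json:"data"`
		}
		if err := json.Unmarshal(sc.Bytes(), &row); err != nil {
			continue
		}
		if row.Data["user"] == user {
			return true, nil
		}
	}
	return false, sc.Err()
}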

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 8 has already been assigned to another step:
Creating docker container (CPUs=2, Memory=2200MB) ...
Cannot use for:
docker "json-output-20211117115930-2067" container is missing, will recreate.
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a99ee297-582c-49c2-90c2-5b97715d8af8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20211117115930-2067] minikube v1.24.0 on Darwin 11.1",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: d1e371bb-b91b-42bb-8ae6-8acb14e4c366
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=12739"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3144bdc2-396b-4518-831e-ce96c9b5b772
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 76e80716-61bc-4ea7-bd12-1b28520dc4da
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5cd243ca-3d7a-4841-9277-c9d38b5d42f0
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2890cc38-6484-40cf-8c24-76ccfa78d9ea
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fc0a0123-217f-49d5-92cf-7ef54fc10f37
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20211117115930-2067 in cluster json-output-20211117115930-2067",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: aa3d1c5c-0314-47a1-8c93-34f168607573
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7cef96c2-032d-4505-9d50-789ce0ff3e67
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: debb5b74-4f84-40ae-bf18-c43be9960edc
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 155e3776-c7f7-4c6c-b8b4-24bab0fe0c32
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20211117115930-2067\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b06fa302-ab44-4ba7-a105-190ec9f29b3a
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 090cac46-06f2-4dac-943e-774a0e4ba121
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20211117115930-2067\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 2a2dc982-624d-41e4-ae4f-f69ecc34769d
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules",
"name": "GUEST_PROVISION",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
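The failure at json_output_test.go:114 is a distinctness violation: currentstep "8" is first used for "Creating docker container (CPUs=2, Memory=2200MB) ..." and is then re-emitted for the "container is missing, will recreate" retry. A check of this kind reduces to a map from currentstep to the first message seen for it; the sketch below, written over the illustrative cloudEvent type from the earlier sketch, shows the idea and is not the test's actual code.

// checkDistinctSteps returns an error when a currentstep value is reused for
// a different message, which is what the failure above reports for step 8.
func checkDistinctSteps(events []cloudEvent) error {
	seen := map[string]string{} // currentstep -> first message observed
	for _, ev := range events {
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		step, msg := ev.Data["currentstep"], ev.Data["message"]
		if prev, ok := seen[step]; ok && prev != msg {
			return fmt.Errorf("step %s already assigned to %q, cannot use for %q", step, prev, msg)
		}
		seen[step] = msg
	}
	return nil
}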

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a99ee297-582c-49c2-90c2-5b97715d8af8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20211117115930-2067] minikube v1.24.0 on Darwin 11.1",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: d1e371bb-b91b-42bb-8ae6-8acb14e4c366
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=12739"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3144bdc2-396b-4518-831e-ce96c9b5b772
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 76e80716-61bc-4ea7-bd12-1b28520dc4da
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5cd243ca-3d7a-4841-9277-c9d38b5d42f0
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2890cc38-6484-40cf-8c24-76ccfa78d9ea
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fc0a0123-217f-49d5-92cf-7ef54fc10f37
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20211117115930-2067 in cluster json-output-20211117115930-2067",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: aa3d1c5c-0314-47a1-8c93-34f168607573
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7cef96c2-032d-4505-9d50-789ce0ff3e67
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: debb5b74-4f84-40ae-bf18-c43be9960edc
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 155e3776-c7f7-4c6c-b8b4-24bab0fe0c32
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20211117115930-2067\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b06fa302-ab44-4ba7-a105-190ec9f29b3a
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 090cac46-06f2-4dac-943e-774a0e4ba121
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20211117115930-2067\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 2a2dc982-624d-41e4-ae4f-f69ecc34769d
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules",
"name": "GUEST_PROVISION",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
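The companion check at json_output_test.go:133 rejects the same stream because the retry path repeats step 8, so the observed currentstep sequence 0, 1, 3, 5, 8, 8, 8 does not keep increasing. A sketch of that ordering check, again over the decoded events and treating a repeated value as a violation (consistent with the failure above), using strconv for the numeric comparison:

// checkIncreasingSteps returns an error when a step's currentstep value is
// not strictly greater than the previous one, as with the repeated step 8.
func checkIncreasingSteps(events []cloudEvent) error {
	last := -1
	for _, ev := range events {
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		cur, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			return err
		}
		if cur <= last {
			return fmt.Errorf("current step %d is not in increasing order after %d", cur, last)
		}
		last = cur
	}
	return nil
}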

                                                
                                    
TestJSONOutput/pause/Command (0.18s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20211117115930-2067 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p json-output-20211117115930-2067 --output=json --user=testUser: exit status 80 (175.829762ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8b73d62b-ebf7-40a8-a3fe-73763a549c7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"state: unknown state \"json-output-20211117115930-2067\": docker container inspect json-output-20211117115930-2067 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117115930-2067","name":"GUEST_STATUS","url":""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-amd64 pause -p json-output-20211117115930-2067 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (0.18s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.55s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20211117115930-2067 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-amd64 unpause -p json-output-20211117115930-2067 --output=json --user=testUser: exit status 80 (550.364681ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "json-output-20211117115930-2067": docker container inspect json-output-20211117115930-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20211117115930-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-amd64 unpause -p json-output-20211117115930-2067 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (0.55s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (14.68s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20211117115930-2067 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p json-output-20211117115930-2067 --output=json --user=testUser: exit status 82 (14.683933495s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4ca39f0c-62bc-49f0-b388-1f262f6564a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117115930-2067\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"0b27b01b-1d8d-4c39-89d3-55a369d20d1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117115930-2067\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"17cef972-a502-4920-923a-aa2bc06f40f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117115930-2067\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"228cf9f5-4483-4596-910e-37c7947de7f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117115930-2067\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"99ad25ed-752a-4341-a336-dd722b80b988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117115930-2067\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"a84651a9-c08f-45aa-8a0e-92f8fe1a02ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117115930-2067\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"07dd9feb-3981-4745-82a2-23c6f86ba360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"82","issues":"","message":"docker container inspect json-output-20211117115930-2067 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117115930-2067","name":"GUEST_STOP_TIMEOUT","url":""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-amd64 stop -p json-output-20211117115930-2067 --output=json --user=testUser": exit status 82
--- FAIL: TestJSONOutput/stop/Command (14.68s)
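The stop command retries several times, emitting the same "Stopping node" step-0 event on each attempt, and then gives up with a single io.k8s.sigs.minikube.error event carrying name GUEST_STOP_TIMEOUT and exitcode 82. A small helper that pulls that terminal error event out of the decoded stream (illustrative only, reusing the cloudEvent type from the earlier sketch):

// lastErrorEvent returns the final error-type event in the stream, e.g. the
// GUEST_STOP_TIMEOUT event with exitcode 82 shown above.
func lastErrorEvent(events []cloudEvent) (cloudEvent, bool) {
	var last cloudEvent
	found := false
	for _, ev := range events {
		if ev.Type == "io.k8s.sigs.minikube.error" {
			last, found = ev, true
		}
	}
	return last, found
}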

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
json_output_test.go:114: step 0 has already been assigned to another step:
Stopping node "json-output-20211117115930-2067"  ...
Cannot use for:
Stopping node "json-output-20211117115930-2067"  ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 4ca39f0c-62bc-49f0-b388-1f262f6564a8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0b27b01b-1d8d-4c39-89d3-55a369d20d1d
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 17cef972-a502-4920-923a-aa2bc06f40f5
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 228cf9f5-4483-4596-910e-37c7947de7f8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 99ad25ed-752a-4341-a336-dd722b80b988
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a84651a9-c08f-45aa-8a0e-92f8fe1a02ab
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 07dd9feb-3981-4745-82a2-23c6f86ba360
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20211117115930-2067 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117115930-2067",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 4ca39f0c-62bc-49f0-b388-1f262f6564a8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0b27b01b-1d8d-4c39-89d3-55a369d20d1d
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 17cef972-a502-4920-923a-aa2bc06f40f5
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 228cf9f5-4483-4596-910e-37c7947de7f8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 99ad25ed-752a-4341-a336-dd722b80b988
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a84651a9-c08f-45aa-8a0e-92f8fe1a02ab
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117115930-2067\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 07dd9feb-3981-4745-82a2-23c6f86ba360
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20211117115930-2067 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117115930-2067",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (95.4s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20211117120035-2067 --network=
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20211117120035-2067 --network=: (1m30.016243077s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:107: docker-network-20211117120035-2067 network is not listed by [[docker network ls --format {{.Name}}]]: 
-- stdout --
	bridge
	host
	none

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "docker-network-20211117120035-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20211117120035-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20211117120035-2067: (5.274954246s)
--- FAIL: TestKicCustomNetwork/create_custom_network (95.40s)
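Unlike the other failures in this run, the start here completed, but the docker network named after the profile never appears in "docker network ls --format {{.Name}}", which lists only the default bridge, host and none networks. The check reduces to running that command and scanning its lines for the expected name; the helper below is a sketch of that idea (it needs the os/exec and strings imports and is not the test's own code).

// networkExists runs "docker network ls --format {{.Name}}" and reports
// whether the given network name appears in the output.
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}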

                                                
                                    
TestMountStart/serial/StartWithMountFirst (45.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20211117120454-2067 --memory=2048 --mount --driver=docker 
mount_start_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-20211117120454-2067 --memory=2048 --mount --driver=docker : exit status 80 (45.05198934s)

                                                
                                                
-- stdout --
	* [mount-start-1-20211117120454-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node mount-start-1-20211117120454-2067 in cluster mount-start-1-20211117120454-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-1-20211117120454-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:04:59.924228    6070 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:05:33.910886    6070 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p mount-start-1-20211117120454-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:79: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-20211117120454-2067 --memory=2048 --mount --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/StartWithMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-20211117120454-2067
helpers_test.go:235: (dbg) docker inspect mount-start-1-20211117120454-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "mount-start-1-20211117120454-2067",
	        "Id": "1b6e30567a279dc86a836a5c28dfafc065217d829db33d0ffa96a587d0c215c9",
	        "Created": "2021-11-17T20:05:29.512021748Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-20211117120454-2067 -n mount-start-1-20211117120454-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-20211117120454-2067 -n mount-start-1-20211117120454-2067: exit status 7 (149.95636ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:05:40.083368    6294 status.go:247] status error: host: state: unknown state "mount-start-1-20211117120454-2067": docker container inspect mount-start-1-20211117120454-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20211117120454-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-20211117120454-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/StartWithMountFirst (45.88s)
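The MountStart group repeats the pattern seen earlier: start exits with status 80 (GUEST_PROVISION) after the kernel-modules error, and the post-mortem "minikube status --format={{.Host}}" call exits with status 7 while printing Nonexistent, which helpers_test.go notes may be acceptable. When a harness drives these commands through os/exec, the numeric exit status has to be recovered from the returned *exec.ExitError; a minimal sketch (needing only the os/exec and errors imports):

// runAndExitCode runs the command and returns its combined output plus the
// process exit code (0 on success), e.g. 80 for the failed start above or 7
// for the status call against the nonexistent host.
func runAndExitCode(name string, args ...string) (string, int, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err == nil {
		return string(out), 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode(), nil
	}
	return string(out), -1, err // the command could not be started at all
}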

                                                
                                    
TestMountStart/serial/StartWithMountSecond (45.72s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20211117120454-2067 --memory=2048 --mount --driver=docker 
mount_start_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-2-20211117120454-2067 --memory=2048 --mount --driver=docker : exit status 80 (45.461261911s)

                                                
                                                
-- stdout --
	* [mount-start-2-20211117120454-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node mount-start-2-20211117120454-2067 in cluster mount-start-2-20211117120454-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-2-20211117120454-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:05:45.838492    6299 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:06:19.910746    6299 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p mount-start-2-20211117120454-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:79: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-2-20211117120454-2067 --memory=2048 --mount --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/StartWithMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117120454-2067
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117120454-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117120454-2067",
	        "Id": "4f264d13428a968f6203e3142d95663b4dd2b0b6db91f1ec11633f50a0fd6d5a",
	        "Created": "2021-11-17T20:06:15.473787303Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067: exit status 7 (155.520464ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:06:25.807619    6528 status.go:247] status error: host: state: unknown state "mount-start-2-20211117120454-2067": docker container inspect mount-start-2-20211117120454-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117120454-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117120454-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/StartWithMountSecond (45.72s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.64s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20211117120454-2067 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-1-20211117120454-2067 ssh ls /minikube-host: exit status 80 (378.026328ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-1-20211117120454-2067": docker container inspect mount-start-1-20211117120454-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20211117120454-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_1bcea4236c355dc0a83fd7bb6da859e41ac1c109_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-darwin-amd64 -p mount-start-1-20211117120454-2067 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-20211117120454-2067
helpers_test.go:235: (dbg) docker inspect mount-start-1-20211117120454-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "mount-start-1-20211117120454-2067",
	        "Id": "1b6e30567a279dc86a836a5c28dfafc065217d829db33d0ffa96a587d0c215c9",
	        "Created": "2021-11-17T20:05:29.512021748Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-20211117120454-2067 -n mount-start-1-20211117120454-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-20211117120454-2067 -n mount-start-1-20211117120454-2067: exit status 7 (153.000694ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:06:26.446209    6542 status.go:247] status error: host: state: unknown state "mount-start-1-20211117120454-2067": docker container inspect mount-start-1-20211117120454-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20211117120454-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-20211117120454-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountFirst (0.64s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.45s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211117120454-2067 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-20211117120454-2067 ssh ls /minikube-host: exit status 80 (197.211386ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-2-20211117120454-2067": docker container inspect mount-start-2-20211117120454-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117120454-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_1bcea4236c355dc0a83fd7bb6da859e41ac1c109_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-20211117120454-2067 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117120454-2067
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117120454-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117120454-2067",
	        "Id": "4f264d13428a968f6203e3142d95663b4dd2b0b6db91f1ec11633f50a0fd6d5a",
	        "Created": "2021-11-17T20:06:15.473787303Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067: exit status 7 (145.077391ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:06:26.893427    6556 status.go:247] status error: host: state: unknown state "mount-start-2-20211117120454-2067": docker container inspect mount-start-2-20211117120454-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117120454-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117120454-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountSecond (0.45s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.46s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211117120454-2067 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-20211117120454-2067 ssh ls /minikube-host: exit status 80 (195.722004ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-2-20211117120454-2067": docker container inspect mount-start-2-20211117120454-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117120454-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_1bcea4236c355dc0a83fd7bb6da859e41ac1c109_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-20211117120454-2067 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostDelete]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117120454-2067
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117120454-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T20:05:45Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "mount-start-2-20211117120454-2067"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/mount-start-2-20211117120454-2067/_data",
	        "Name": "mount-start-2-20211117120454-2067",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067: exit status 7 (157.200351ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:06:34.336534    6617 status.go:247] status error: host: state: unknown state "mount-start-2-20211117120454-2067": docker container inspect mount-start-2-20211117120454-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117120454-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117120454-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountPostDelete (0.46s)

                                                
                                    
TestMountStart/serial/Stop (14.98s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20211117120454-2067
mount_start_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p mount-start-2-20211117120454-2067: exit status 82 (14.708178113s)

                                                
                                                
-- stdout --
	* Stopping node "mount-start-2-20211117120454-2067"  ...
	* Stopping node "mount-start-2-20211117120454-2067"  ...
	* Stopping node "mount-start-2-20211117120454-2067"  ...
	* Stopping node "mount-start-2-20211117120454-2067"  ...
	* Stopping node "mount-start-2-20211117120454-2067"  ...
	* Stopping node "mount-start-2-20211117120454-2067"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect mount-start-2-20211117120454-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117120454-2067
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:101: stop failed: "out/minikube-darwin-amd64 stop -p mount-start-2-20211117120454-2067" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117120454-2067
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117120454-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T20:05:45Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "mount-start-2-20211117120454-2067"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/mount-start-2-20211117120454-2067/_data",
	        "Name": "mount-start-2-20211117120454-2067",
	        "Options": {},
	        "Scope": "local"
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067: exit status 7 (165.347126ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:06:49.312529    6651 status.go:247] status error: host: state: unknown state "mount-start-2-20211117120454-2067": docker container inspect mount-start-2-20211117120454-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117120454-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117120454-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/Stop (14.98s)
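Note: the stop retries above end in GUEST_STOP_TIMEOUT because the docker container named for this profile does not exist, while the post-mortem `docker inspect` shows the minikube-created volume is still present. A quick way to confirm that split state by hand (a sketch only; the --filter usage is illustrative and not taken from the test):

	# no container with this name should be listed
	docker ps -a --filter name=mount-start-2-20211117120454-2067
	# but the named volume from the earlier create attempt still exists
	docker volume inspect mount-start-2-20211117120454-2067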

                                                
                                    
TestMountStart/serial/RestartStopped (66.5s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20211117120454-2067
mount_start_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-2-20211117120454-2067: exit status 80 (1m6.160775018s)

                                                
                                                
-- stdout --
	* [mount-start-2-20211117120454-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node mount-start-2-20211117120454-2067 in cluster mount-start-2-20211117120454-2067
	* Pulling base image ...
	* docker "mount-start-2-20211117120454-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-2-20211117120454-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:07:13.495629    6656 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:07:49.648012    6656 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p mount-start-2-20211117120454-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:112: restart failed: "out/minikube-darwin-amd64 start -p mount-start-2-20211117120454-2067" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/RestartStopped]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117120454-2067
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117120454-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117120454-2067",
	        "Id": "8b2c370ad7393d7179b5c5653e7a5da92de0758a68b450aa336d3359233910e9",
	        "Created": "2021-11-17T20:07:45.233386775Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067: exit status 7 (209.266855ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:07:55.815583    6971 status.go:247] status error: host: state: unknown state "mount-start-2-20211117120454-2067": docker container inspect mount-start-2-20211117120454-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117120454-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117120454-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/RestartStopped (66.50s)
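Note: the restart exits with GUEST_PROVISION after two recreate attempts, both hitting the same oci.go error ("Unable to locate kernel modules"). The stderr above already names the cleanup to try; shown here only as a sketch, with no guarantee it resolves the kernel-modules error itself:

	# clear the stale profile state (volume, network) left by the failed attempts
	out/minikube-darwin-amd64 delete -p mount-start-2-20211117120454-2067
	# then retry the start this test exercises
	out/minikube-darwin-amd64 start -p mount-start-2-20211117120454-2067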

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.48s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211117120454-2067 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-20211117120454-2067 ssh ls /minikube-host: exit status 80 (218.248474ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-2-20211117120454-2067": docker container inspect mount-start-2-20211117120454-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117120454-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_1bcea4236c355dc0a83fd7bb6da859e41ac1c109_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-20211117120454-2067 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117120454-2067
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117120454-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117120454-2067",
	        "Id": "8b2c370ad7393d7179b5c5653e7a5da92de0758a68b450aa336d3359233910e9",
	        "Created": "2021-11-17T20:07:45.233386775Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117120454-2067 -n mount-start-2-20211117120454-2067: exit status 7 (146.6111ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:07:56.294448    6985 status.go:247] status error: host: state: unknown state "mount-start-2-20211117120454-2067": docker container inspect mount-start-2-20211117120454-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117120454-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117120454-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountPostStop (0.48s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (45.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:82: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117120800-2067 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:82: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211117120800-2067 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 80 (44.99960155s)

                                                
                                                
-- stdout --
	* [multinode-20211117120800-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node multinode-20211117120800-2067 in cluster multinode-20211117120800-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20211117120800-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:08:01.006395    7048 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:08:01.006525    7048 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:08:01.006530    7048 out.go:310] Setting ErrFile to fd 2...
	I1117 12:08:01.006533    7048 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:08:01.006611    7048 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:08:01.006911    7048 out.go:304] Setting JSON to false
	I1117 12:08:01.030567    7048 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":2256,"bootTime":1637177425,"procs":319,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:08:01.030662    7048 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:08:01.057822    7048 out.go:176] * [multinode-20211117120800-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:08:01.057972    7048 notify.go:174] Checking for updates...
	I1117 12:08:01.106293    7048 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:08:01.133660    7048 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:08:01.159255    7048 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:08:01.185085    7048 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:08:01.185302    7048 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:08:01.274683    7048 docker.go:132] docker version: linux-20.10.5
	I1117 12:08:01.274829    7048 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:08:01.430484    7048 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:08:01.384542871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:08:01.457632    7048 out.go:176] * Using the docker driver based on user configuration
	I1117 12:08:01.457690    7048 start.go:280] selected driver: docker
	I1117 12:08:01.457704    7048 start.go:775] validating driver "docker" against <nil>
	I1117 12:08:01.457743    7048 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:08:01.461316    7048 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:08:01.613261    7048 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:08:01.570331126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:08:01.613353    7048 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:08:01.613485    7048 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:08:01.613499    7048 cni.go:93] Creating CNI manager for ""
	I1117 12:08:01.613504    7048 cni.go:154] 0 nodes found, recommending kindnet
	I1117 12:08:01.613516    7048 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1117 12:08:01.613521    7048 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1117 12:08:01.613526    7048 start_flags.go:277] Found "CNI" CNI - setting NetworkPlugin=cni
	I1117 12:08:01.613535    7048 start_flags.go:282] config:
	{Name:multinode-20211117120800-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117120800-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:08:01.662307    7048 out.go:176] * Starting control plane node multinode-20211117120800-2067 in cluster multinode-20211117120800-2067
	I1117 12:08:01.662405    7048 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:08:01.688365    7048 out.go:176] * Pulling base image ...
	I1117 12:08:01.688486    7048 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:08:01.688534    7048 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:08:01.688575    7048 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:08:01.688603    7048 cache.go:57] Caching tarball of preloaded images
	I1117 12:08:01.688810    7048 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:08:01.688835    7048 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:08:01.691192    7048 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/multinode-20211117120800-2067/config.json ...
	I1117 12:08:01.691270    7048 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/multinode-20211117120800-2067/config.json: {Name:mk19e0c27f49149fe9084e93ed840f401c62af9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:08:01.803862    7048 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:08:01.803900    7048 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:08:01.803913    7048 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:08:01.803965    7048 start.go:313] acquiring machines lock for multinode-20211117120800-2067: {Name:mkad1352d1520800be4d619e3690050418979e5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:08:01.804105    7048 start.go:317] acquired machines lock for "multinode-20211117120800-2067" in 129.95µs
	I1117 12:08:01.804136    7048 start.go:89] Provisioning new machine with config: &{Name:multinode-20211117120800-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117120800-2067 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:08:01.804202    7048 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:08:01.851596    7048 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:08:01.851813    7048 start.go:160] libmachine.API.Create for "multinode-20211117120800-2067" (driver="docker")
	I1117 12:08:01.851840    7048 client.go:168] LocalClient.Create starting
	I1117 12:08:01.851948    7048 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:08:01.851989    7048 main.go:130] libmachine: Decoding PEM data...
	I1117 12:08:01.852009    7048 main.go:130] libmachine: Parsing certificate...
	I1117 12:08:01.852084    7048 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:08:01.852114    7048 main.go:130] libmachine: Decoding PEM data...
	I1117 12:08:01.852123    7048 main.go:130] libmachine: Parsing certificate...
	I1117 12:08:01.852655    7048 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:08:01.954553    7048 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:08:01.954670    7048 network_create.go:254] running [docker network inspect multinode-20211117120800-2067] to gather additional debugging logs...
	I1117 12:08:01.954687    7048 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067
	W1117 12:08:02.056610    7048 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:02.056635    7048 network_create.go:257] error running [docker network inspect multinode-20211117120800-2067]: docker network inspect multinode-20211117120800-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117120800-2067
	I1117 12:08:02.056649    7048 network_create.go:259] output of [docker network inspect multinode-20211117120800-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117120800-2067
	
	** /stderr **
	I1117 12:08:02.056734    7048 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:08:02.159989    7048 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000112430] misses:0}
	I1117 12:08:02.160026    7048 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:08:02.160042    7048 network_create.go:106] attempt to create docker network multinode-20211117120800-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:08:02.160119    7048 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067
	I1117 12:08:06.026353    7048 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067: (3.866224377s)
	I1117 12:08:06.026378    7048 network_create.go:90] docker network multinode-20211117120800-2067 192.168.49.0/24 created
	I1117 12:08:06.026392    7048 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117120800-2067" container
	I1117 12:08:06.026500    7048 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:08:06.127098    7048 cli_runner.go:115] Run: docker volume create multinode-20211117120800-2067 --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:08:06.230305    7048 oci.go:102] Successfully created a docker volume multinode-20211117120800-2067
	I1117 12:08:06.230439    7048 cli_runner.go:115] Run: docker run --rm --name multinode-20211117120800-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --entrypoint /usr/bin/test -v multinode-20211117120800-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:08:06.727002    7048 oci.go:106] Successfully prepared a docker volume multinode-20211117120800-2067
	I1117 12:08:06.727056    7048 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	E1117 12:08:06.727057    7048 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:08:06.727074    7048 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:08:06.727081    7048 client.go:171] LocalClient.Create took 4.87527965s
	I1117 12:08:06.727179    7048 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117120800-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:08:08.732058    7048 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:08:08.732263    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:08.867659    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:08.867763    7048 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:09.147899    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:09.261525    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:09.261609    7048 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:09.802017    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:09.912472    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:09.912559    7048 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:10.573531    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:10.702475    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	W1117 12:08:10.702570    7048 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:08:10.702592    7048 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:10.702601    7048 start.go:129] duration metric: createHost completed in 8.89847693s
	I1117 12:08:10.702609    7048 start.go:80] releasing machines lock for "multinode-20211117120800-2067", held for 8.898579242s
	W1117 12:08:10.702625    7048 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:08:10.703139    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:10.822556    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:10.822604    7048 delete.go:82] Unable to get host status for multinode-20211117120800-2067, assuming it has already been deleted: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	W1117 12:08:10.822740    7048 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:08:10.822759    7048 start.go:547] Will try again in 5 seconds ...
	I1117 12:08:12.442373    7048 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117120800-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.715209651s)
	I1117 12:08:12.442397    7048 kic.go:188] duration metric: took 5.715369 seconds to extract preloaded images to volume
	I1117 12:08:15.826359    7048 start.go:313] acquiring machines lock for multinode-20211117120800-2067: {Name:mkad1352d1520800be4d619e3690050418979e5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:08:15.826535    7048 start.go:317] acquired machines lock for "multinode-20211117120800-2067" in 146.71µs
	I1117 12:08:15.826577    7048 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:08:15.826591    7048 fix.go:55] fixHost starting: 
	I1117 12:08:15.827039    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:15.928762    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:15.928821    7048 fix.go:108] recreateIfNeeded on multinode-20211117120800-2067: state= err=unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:15.928849    7048 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:08:15.955882    7048 out.go:176] * docker "multinode-20211117120800-2067" container is missing, will recreate.
	I1117 12:08:15.955911    7048 delete.go:124] DEMOLISHING multinode-20211117120800-2067 ...
	I1117 12:08:15.956120    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:16.077697    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:08:16.077743    7048 stop.go:75] unable to get state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:16.077757    7048 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:16.078141    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:16.179810    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:16.179865    7048 delete.go:82] Unable to get host status for multinode-20211117120800-2067, assuming it has already been deleted: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:16.179953    7048 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117120800-2067
	W1117 12:08:16.283511    7048 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:16.283540    7048 kic.go:360] could not find the container multinode-20211117120800-2067 to remove it. will try anyways
	I1117 12:08:16.283619    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:16.387805    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:08:16.387848    7048 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:16.387945    7048 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0"
	W1117 12:08:16.488749    7048 cli_runner.go:162] docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:08:16.488773    7048 oci.go:656] error shutdown multinode-20211117120800-2067: docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:17.492965    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:17.597853    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:17.597894    7048 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:17.597913    7048 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:08:17.597934    7048 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:18.067970    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:18.172306    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:18.172345    7048 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:18.172353    7048 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:08:18.172376    7048 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:19.067975    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:19.173536    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:19.173574    7048 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:19.173594    7048 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:08:19.173617    7048 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:19.817921    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:19.919690    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:19.919731    7048 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:19.919739    7048 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:08:19.919760    7048 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:21.037949    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:21.145628    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:21.145667    7048 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:21.145675    7048 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:08:21.145697    7048 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:22.666970    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:22.771782    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:22.771825    7048 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:22.771835    7048 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:08:22.771858    7048 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:25.819971    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:25.923505    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:25.923555    7048 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:25.923564    7048 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:08:25.923585    7048 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:31.709444    7048 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:31.811099    7048 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:31.811140    7048 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:31.811149    7048 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:08:31.811174    7048 oci.go:87] couldn't shut down multinode-20211117120800-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	 
	I1117 12:08:31.811269    7048 cli_runner.go:115] Run: docker rm -f -v multinode-20211117120800-2067
	I1117 12:08:31.912371    7048 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117120800-2067
	W1117 12:08:32.010596    7048 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:32.010713    7048 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:08:32.111178    7048 cli_runner.go:115] Run: docker network rm multinode-20211117120800-2067
	I1117 12:08:34.800010    7048 cli_runner.go:168] Completed: docker network rm multinode-20211117120800-2067: (2.688806499s)
	W1117 12:08:34.800289    7048 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:08:34.800296    7048 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:08:35.802117    7048 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:08:35.829602    7048 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:08:35.829746    7048 start.go:160] libmachine.API.Create for "multinode-20211117120800-2067" (driver="docker")
	I1117 12:08:35.829779    7048 client.go:168] LocalClient.Create starting
	I1117 12:08:35.829948    7048 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:08:35.830020    7048 main.go:130] libmachine: Decoding PEM data...
	I1117 12:08:35.830044    7048 main.go:130] libmachine: Parsing certificate...
	I1117 12:08:35.830128    7048 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:08:35.830170    7048 main.go:130] libmachine: Decoding PEM data...
	I1117 12:08:35.830182    7048 main.go:130] libmachine: Parsing certificate...
	I1117 12:08:35.830945    7048 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:08:35.933253    7048 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:08:35.933417    7048 network_create.go:254] running [docker network inspect multinode-20211117120800-2067] to gather additional debugging logs...
	I1117 12:08:35.933448    7048 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067
	W1117 12:08:36.036102    7048 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:36.036124    7048 network_create.go:257] error running [docker network inspect multinode-20211117120800-2067]: docker network inspect multinode-20211117120800-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117120800-2067
	I1117 12:08:36.036137    7048 network_create.go:259] output of [docker network inspect multinode-20211117120800-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117120800-2067
	
	** /stderr **
	I1117 12:08:36.036225    7048 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:08:36.137198    7048 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000112430] amended:false}} dirty:map[] misses:0}
	I1117 12:08:36.137236    7048 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:08:36.137411    7048 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000112430] amended:true}} dirty:map[192.168.49.0:0xc000112430 192.168.58.0:0xc0005c00d0] misses:0}
	I1117 12:08:36.137429    7048 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:08:36.137448    7048 network_create.go:106] attempt to create docker network multinode-20211117120800-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:08:36.137542    7048 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067
	I1117 12:08:39.979597    7048 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067: (3.842023156s)
	I1117 12:08:39.979625    7048 network_create.go:90] docker network multinode-20211117120800-2067 192.168.58.0/24 created
	I1117 12:08:39.979640    7048 kic.go:106] calculated static IP "192.168.58.2" for the "multinode-20211117120800-2067" container
	I1117 12:08:39.979769    7048 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:08:40.081441    7048 cli_runner.go:115] Run: docker volume create multinode-20211117120800-2067 --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:08:40.183903    7048 oci.go:102] Successfully created a docker volume multinode-20211117120800-2067
	I1117 12:08:40.184056    7048 cli_runner.go:115] Run: docker run --rm --name multinode-20211117120800-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --entrypoint /usr/bin/test -v multinode-20211117120800-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:08:40.599479    7048 oci.go:106] Successfully prepared a docker volume multinode-20211117120800-2067
	E1117 12:08:40.599534    7048 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:08:40.599541    7048 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:08:40.599547    7048 client.go:171] LocalClient.Create took 4.769803624s
	I1117 12:08:40.599561    7048 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:08:40.599708    7048 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117120800-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:08:42.605279    7048 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:08:42.605443    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:42.751107    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:42.751240    7048 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:42.930060    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:43.050989    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:43.051078    7048 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:43.383723    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:43.502822    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:43.502935    7048 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:43.972527    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:44.088184    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	W1117 12:08:44.088272    7048 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:08:44.088292    7048 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:44.088303    7048 start.go:129] duration metric: createHost completed in 8.286218643s
	I1117 12:08:44.088377    7048 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:08:44.088439    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:44.206123    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:44.206235    7048 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:44.411666    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:44.531702    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:44.531809    7048 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:44.829583    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:44.961990    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:08:44.962072    7048 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:45.626291    7048 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:08:45.738077    7048 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	W1117 12:08:45.738180    7048 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:08:45.738204    7048 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:08:45.738219    7048 fix.go:57] fixHost completed within 29.911905968s
	I1117 12:08:45.738231    7048 start.go:80] releasing machines lock for "multinode-20211117120800-2067", held for 29.91196095s
	W1117 12:08:45.738386    7048 out.go:241] * Failed to start docker container. Running "minikube delete -p multinode-20211117120800-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117120800-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:08:45.819897    7048 out.go:176] 
	W1117 12:08:45.820047    7048 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:08:45.820060    7048 out.go:241] * 
	* 
	W1117 12:08:45.820856    7048 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:08:45.945250    7048 out.go:176] 

** /stderr **
multinode_test.go:84: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-20211117120800-2067 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 80
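
Most of the stderr dump above is minikube's shutdown verification: once "sudo init 0" fails because the container is already gone, oci.go keeps polling docker container inspect --format={{.State.Status}} with growing delays (462ms up to 5.78s) before concluding "couldn't shut down ... (might be okay)". A minimal sketch of that poll-with-backoff pattern, assuming a hypothetical waitForExited helper and a plain doubling delay rather than minikube's retry package:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForExited polls `docker container inspect` until the container reports
	// "exited" or no longer exists at all; a missing container cannot be shut
	// down any further, so it is treated as done. Illustrative helper only.
	func waitForExited(name string, attempts int) error {
		delay := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("docker", "container", "inspect", name,
				"--format", "{{.State.Status}}").CombinedOutput()
			state := strings.TrimSpace(string(out))
			if err == nil && state == "exited" {
				return nil
			}
			if strings.Contains(state, "No such container") {
				return nil // already gone, mirroring the "(might be okay)" note above
			}
			time.Sleep(delay)
			delay *= 2 // roughly the 462ms to 5.78s progression seen in the log
		}
		return fmt.Errorf("container %q never reached the exited state", name)
	}

	func main() {
		if err := waitForExited("multinode-20211117120800-2067", 8); err != nil {
			fmt.Println("verify shutdown:", err)
		}
	}

Treating "No such container" as success is what lets the delete path proceed to docker rm and docker network rm, as the log shows.
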
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "03db1702f4ff1ac7cf9d70ee3249b375af57492cc58c6cd18f61f1206a4a4d75",
	        "Created": "2021-11-17T20:08:36.247710996Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
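
The --format argument that recurs throughout this report is an ordinary Go text/template which the docker CLI evaluates against the object being inspected; the network record above is exactly the data those templates read. A small illustration of how such a template renders, using trimmed-down stand-in types for only the fields the template touches (the structs are illustrative, not docker's real API types):

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-ins for just the fields minikube's --format template reads.
	type ipamConfig struct {
		Subnet  string
		Gateway string
	}

	type network struct {
		Name    string
		Driver  string
		IPAM    struct{ Config []ipamConfig }
		Options map[string]string
	}

	const format = `{"Name": "{{.Name}}","Driver": "{{.Driver}}",` +
		`"Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}",` +
		`"Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}",` +
		`"MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}}`

	func main() {
		n := network{Name: "multinode-20211117120800-2067", Driver: "bridge"}
		n.IPAM.Config = []ipamConfig{{Subnet: "192.168.58.0/24", Gateway: "192.168.58.1"}}
		n.Options = map[string]string{"com.docker.network.driver.mtu": "1500"}

		tmpl := template.Must(template.New("net").Parse(format))
		_ = tmpl.Execute(os.Stdout, n) // prints the JSON-ish summary line seen in the log
	}

When the container or network does not exist, docker exits non-zero before any template is evaluated, which is why the inspect calls above only ever produce "Error: No such container".
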
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (160.285449ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:08:46.643914    7275 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (45.68s)
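
Earlier in the start log, network.go skipped 192.168.49.0/24 because it still had an unexpired reservation and settled on 192.168.58.0/24, which is the subnet the orphaned network in the post-mortem still carries. A rough sketch of that first-free-private-subnet selection, assuming a plain map for reservations instead of minikube's sync.Map bookkeeping:

	package main

	import "fmt"

	// pickFreeSubnet walks candidate private /24s in order and returns the first
	// one that is neither reserved (recently handed out) nor already in use by
	// an existing docker network. Illustrative only; the real logic also probes
	// host interfaces before deciding.
	func pickFreeSubnet(candidates []string, reserved, inUse map[string]bool) (string, error) {
		for _, cidr := range candidates {
			if reserved[cidr] {
				continue // e.g. 192.168.49.0/24 had an unexpired reservation above
			}
			if inUse[cidr] {
				continue
			}
			reserved[cidr] = true // hold it for the caller, like the 1m0s lease in the log
			return cidr, nil
		}
		return "", fmt.Errorf("no free private subnet among %d candidates", len(candidates))
	}

	func main() {
		candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
		reserved := map[string]bool{"192.168.49.0/24": true}
		inUse := map[string]bool{}

		subnet, err := pickFreeSubnet(candidates, reserved, inUse)
		fmt.Println(subnet, err) // 192.168.58.0/24 <nil>
	}
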

TestMultiNode/serial/DeployApp2Nodes (0.77s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:463: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:463: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (74.920034ms)

** stderr ** 
	error: cluster "multinode-20211117120800-2067" does not exist

** /stderr **
multinode_test.go:465: failed to create busybox deployment to multinode cluster
multinode_test.go:468: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- rollout status deployment/busybox
multinode_test.go:468: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- rollout status deployment/busybox: exit status 1 (72.205487ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117120800-2067"

** /stderr **
multinode_test.go:470: failed to deploy busybox to multinode cluster
multinode_test.go:474: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:474: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (74.559401ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117120800-2067"

** /stderr **
multinode_test.go:476: failed to retrieve Pod IPs
multinode_test.go:480: expected 2 Pod IPs but got 1
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (74.923886ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117120800-2067"

** /stderr **
multinode_test.go:488: failed get Pod names
multinode_test.go:494: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- exec  -- nslookup kubernetes.io
multinode_test.go:494: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- exec  -- nslookup kubernetes.io: exit status 1 (72.715811ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117120800-2067"

** /stderr **
multinode_test.go:496: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:504: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- exec  -- nslookup kubernetes.default
multinode_test.go:504: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- exec  -- nslookup kubernetes.default: exit status 1 (72.629903ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117120800-2067"

** /stderr **
multinode_test.go:506: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:512: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:512: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (74.307539ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117120800-2067"

** /stderr **
multinode_test.go:514: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "03db1702f4ff1ac7cf9d70ee3249b375af57492cc58c6cd18f61f1206a4a4d75",
	        "Created": "2021-11-17T20:08:36.247710996Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (143.310209ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:08:47.418259    7298 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (0.77s)
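
The "expected 2 Pod IPs but got 1" message looks odd next to an empty kubectl reply; it is presumably an artifact of splitting the jsonpath output on single spaces, where even an empty string produces one element. A quick demonstration of that behavior (a sketch of the check, not the test's literal code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// kubectl get pods -o jsonpath='{.items[*].status.podIP}' prints the pod
		// IPs separated by single spaces; the test expects one IP per node.
		healthy := strings.Split("10.244.0.3 10.244.1.2", " ")
		fmt.Println(len(healthy)) // 2

		// When kubectl fails ("no server found for cluster ..."), stdout is empty,
		// and splitting "" on " " still yields a single empty element, which is
		// presumably where "but got 1" comes from.
		broken := strings.Split(strings.TrimSpace(""), " ")
		fmt.Println(len(broken)) // 1
	}
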

TestMultiNode/serial/PingHostFrom2Pods (0.33s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:522: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:522: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117120800-2067 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (71.585088ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117120800-2067"

** /stderr **
multinode_test.go:524: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "03db1702f4ff1ac7cf9d70ee3249b375af57492cc58c6cd18f61f1206a4a4d75",
	        "Created": "2021-11-17T20:08:36.247710996Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (147.357436ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:08:47.747102    7309 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.33s)

TestMultiNode/serial/AddNode (0.47s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20211117120800-2067 -v 3 --alsologtostderr
multinode_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20211117120800-2067 -v 3 --alsologtostderr: exit status 80 (211.630714ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 12:08:47.787468    7314 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:08:47.787709    7314 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:08:47.787714    7314 out.go:310] Setting ErrFile to fd 2...
	I1117 12:08:47.787717    7314 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:08:47.787779    7314 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:08:47.787964    7314 mustload.go:65] Loading cluster: multinode-20211117120800-2067
	I1117 12:08:47.788196    7314 config.go:176] Loaded profile config "multinode-20211117120800-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:08:47.788532    7314 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:47.893108    7314 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:47.933416    7314 out.go:176] 
	W1117 12:08:47.933592    7314 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:08:47.933610    7314 out.go:241] * 
	* 
	W1117 12:08:47.936760    7314 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:08:47.958159    7314 out.go:176] 

** /stderr **
multinode_test.go:109: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-20211117120800-2067 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "03db1702f4ff1ac7cf9d70ee3249b375af57492cc58c6cd18f61f1206a4a4d75",
	        "Created": "2021-11-17T20:08:36.247710996Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (148.087421ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:08:48.215283    7323 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.47s)
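
The "(dbg) Non-zero exit ... exit status 80 (211.630714ms)" lines come from running the binary, timing it, and pulling the exit code out of the resulting *exec.ExitError. A minimal sketch of that pattern; runAndTime is a hypothetical name, not a helper that exists in helpers_test.go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// runAndTime executes a command, measures how long it took, and reports the
	// process exit code even when the command fails, much like the (dbg) lines
	// in this report. Illustrative helper only.
	func runAndTime(name string, args ...string) (int, time.Duration, error) {
		start := time.Now()
		err := exec.Command(name, args...).Run()
		elapsed := time.Since(start)

		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // e.g. 80 for minikube's GUEST_* failures
		}
		return code, elapsed, err
	}

	func main() {
		code, elapsed, err := runAndTime("out/minikube-darwin-amd64", "node", "add",
			"-p", "multinode-20211117120800-2067", "-v", "3", "--alsologtostderr")
		fmt.Printf("exit status %d (%s), err=%v\n", code, elapsed, err)
	}
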

TestMultiNode/serial/ProfileList (0.56s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:152: expected profile "multinode-20211117120800-2067" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-20211117120800-2067\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-20211117120800-2067\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.22.3\",\"ClusterName\":\"multinode-20211117120800-2067\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"ExtraOptions\":[{\"Component\":\"kubelet\",\"Key\":\"cni-conf-dir\",\"Value\":\"/etc/cni/net.mk\"}],\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.22.3\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\"}}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "03db1702f4ff1ac7cf9d70ee3249b375af57492cc58c6cd18f61f1206a4a4d75",
	        "Created": "2021-11-17T20:08:36.247710996Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (146.303888ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:08:48.778856    7341 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.56s)
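
The ProfileList assertion decodes the output of 'minikube profile list --output json' and counts Config.Nodes for the profile; since the cluster was never actually created, the profile still records only the single bootstrap node instead of the three the test expects once the extra nodes exist. A sketch of structs sufficient to reproduce that count, keeping only the fields the check needs (the struct shapes are illustrative, with names taken from the JSON above):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Just enough structure to read the node list out of
	// `minikube profile list --output json`; every other field is ignored.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					Name         string `json:"Name"`
					ControlPlane bool   `json:"ControlPlane"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		raw := `{"invalid":[],"valid":[{"Name":"multinode-20211117120800-2067",
		  "Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`

		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// The test wanted 3 nodes; this profile only ever got its first one.
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
		}
	}
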

TestMultiNode/serial/CopyFile (0.4s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --output json --alsologtostderr
multinode_test.go:170: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --output json --alsologtostderr: exit status 7 (145.380276ms)

-- stdout --
	{"Name":"multinode-20211117120800-2067","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	I1117 12:08:48.821068    7346 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:08:48.821202    7346 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:08:48.821207    7346 out.go:310] Setting ErrFile to fd 2...
	I1117 12:08:48.821210    7346 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:08:48.821285    7346 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:08:48.821451    7346 out.go:304] Setting JSON to true
	I1117 12:08:48.821465    7346 mustload.go:65] Loading cluster: multinode-20211117120800-2067
	I1117 12:08:48.821694    7346 config.go:176] Loaded profile config "multinode-20211117120800-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:08:48.821707    7346 status.go:253] checking status of multinode-20211117120800-2067 ...
	I1117 12:08:48.822051    7346 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:48.924235    7346 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:48.924317    7346 status.go:328] multinode-20211117120800-2067 host status = "" (err=state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	)
	I1117 12:08:48.924340    7346 status.go:255] multinode-20211117120800-2067 status: &{Name:multinode-20211117120800-2067 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 12:08:48.924365    7346 status.go:258] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	E1117 12:08:48.924373    7346 status.go:261] The "multinode-20211117120800-2067" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:177: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
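The decode failure above is a shape mismatch: the captured stdout is a single JSON object (only one node is recorded for this profile), while the test unmarshals it into []cmd.Status. A minimal sketch of the mismatch, assuming a trimmed stand-in struct rather than minikube's real cmd.Status type; the JSON literal is the stdout captured above:

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a simplified stand-in for minikube's cmd.Status, for illustration only.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// decodeStatuses accepts either a single object or an array, so the single-object
// output seen in this run would also decode.
func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	out := []byte(`{"Name":"multinode-20211117120800-2067","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`)

	// Decoding straight into a slice reproduces the error reported by the test
	// ("cannot unmarshal object into Go value of type []...Status").
	var statuses []Status
	fmt.Println(json.Unmarshal(out, &statuses))

	// The tolerant decoder handles both shapes.
	statuses, _ = decodeStatuses(out)
	fmt.Printf("%+v\n", statuses)
}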
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "03db1702f4ff1ac7cf9d70ee3249b375af57492cc58c6cd18f61f1206a4a4d75",
	        "Created": "2021-11-17T20:08:36.247710996Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
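Note that the bare "docker inspect" post-mortem above matched a network, not a container: the JSON shows Scope "local", a bridge driver, and an empty Containers map, i.e. the minikube-created network that outlived the missing container. Plain "docker inspect" resolves a name against any object kind, while the "docker container inspect" calls in the status output fail with "No such container". A small sketch, reusing the name from this log, that queries the two object types explicitly:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "multinode-20211117120800-2067"

	// Restricting inspect to containers reproduces the "No such container" error above.
	if out, err := exec.Command("docker", "container", "inspect", name,
		"--format={{.State.Status}}").CombinedOutput(); err != nil {
		fmt.Printf("container inspect failed: %v: %s", err, out)
	}

	// The same name still resolves as a network, which is what the bare
	// "docker inspect" in the post-mortem matched.
	if out, err := exec.Command("docker", "network", "inspect", name,
		"--format={{.Driver}} {{.Scope}}").CombinedOutput(); err == nil {
		fmt.Printf("network inspect: %s", out)
	}
}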
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (145.344417ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:08:49.181538    7355 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.40s)

                                                
                                    
TestMultiNode/serial/StopNode (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117120800-2067 node stop m03
multinode_test.go:192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117120800-2067 node stop m03: exit status 85 (95.345536ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:194: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 node stop m03": exit status 85
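Exit status 85 (GUEST_NODE_RETRIEVE) here only means that worker node m03 was never created, since the cluster itself failed to start earlier in the run. A hedged sketch of the kind of pre-check a caller could run before "node stop", built on the "node list" command this suite already invokes elsewhere; the binary path and profile name are copied from the log, and the text match is illustrative rather than minikube's own API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasNode reports whether "minikube node list" output for the profile mentions the node.
// This is a rough text check for illustration, not a stable interface.
func hasNode(minikube, profile, node string) (bool, error) {
	out, err := exec.Command(minikube, "node", "list", "-p", profile).CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("node list failed: %w: %s", err, out)
	}
	return strings.Contains(string(out), node), nil
}

func main() {
	ok, err := hasNode("out/minikube-darwin-amd64", "multinode-20211117120800-2067", "m03")
	fmt.Println(ok, err) // for this run: false, since the profile only records the control plane
}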
multinode_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status
multinode_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status: exit status 7 (152.909859ms)

                                                
                                                
-- stdout --
	multinode-20211117120800-2067
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:08:49.422374    7361 status.go:258] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	E1117 12:08:49.422382    7361 status.go:261] The "multinode-20211117120800-2067" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr
multinode_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr: exit status 7 (144.989771ms)

                                                
                                                
-- stdout --
	multinode-20211117120800-2067
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:08:49.471610    7366 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:08:49.471737    7366 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:08:49.471742    7366 out.go:310] Setting ErrFile to fd 2...
	I1117 12:08:49.471745    7366 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:08:49.471823    7366 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:08:49.471998    7366 out.go:304] Setting JSON to false
	I1117 12:08:49.472013    7366 mustload.go:65] Loading cluster: multinode-20211117120800-2067
	I1117 12:08:49.472244    7366 config.go:176] Loaded profile config "multinode-20211117120800-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:08:49.472256    7366 status.go:253] checking status of multinode-20211117120800-2067 ...
	I1117 12:08:49.472601    7366 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:08:49.575205    7366 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:08:49.575265    7366 status.go:328] multinode-20211117120800-2067 host status = "" (err=state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	)
	I1117 12:08:49.575281    7366 status.go:255] multinode-20211117120800-2067 status: &{Name:multinode-20211117120800-2067 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 12:08:49.575308    7366 status.go:258] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	E1117 12:08:49.575312    7366 status.go:261] The "multinode-20211117120800-2067" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:211: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr": multinode-20211117120800-2067
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:215: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr": multinode-20211117120800-2067
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:219: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr": multinode-20211117120800-2067
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "03db1702f4ff1ac7cf9d70ee3249b375af57492cc58c6cd18f61f1206a4a4d75",
	        "Created": "2021-11-17T20:08:36.247710996Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (146.02956ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:08:49.832144    7375 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.65s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:226: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117120800-2067 node start m03 --alsologtostderr
multinode_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117120800-2067 node start m03 --alsologtostderr: exit status 85 (98.549864ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:08:49.964469    7383 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:08:49.964684    7383 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:08:49.964689    7383 out.go:310] Setting ErrFile to fd 2...
	I1117 12:08:49.964692    7383 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:08:49.964774    7383 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:08:49.964954    7383 mustload.go:65] Loading cluster: multinode-20211117120800-2067
	I1117 12:08:49.965181    7383 config.go:176] Loaded profile config "multinode-20211117120800-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:08:49.992731    7383 out.go:176] 
	W1117 12:08:49.992903    7383 out.go:241] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	W1117 12:08:49.992915    7383 out.go:241] * 
	* 
	W1117 12:08:49.995942    7383 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:08:50.018207    7383 out.go:176] 

                                                
                                                
** /stderr **
multinode_test.go:238: I1117 12:08:49.964469    7383 out.go:297] Setting OutFile to fd 1 ...
I1117 12:08:49.964684    7383 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 12:08:49.964689    7383 out.go:310] Setting ErrFile to fd 2...
I1117 12:08:49.964692    7383 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 12:08:49.964774    7383 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
I1117 12:08:49.964954    7383 mustload.go:65] Loading cluster: multinode-20211117120800-2067
I1117 12:08:49.965181    7383 config.go:176] Loaded profile config "multinode-20211117120800-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 12:08:49.992731    7383 out.go:176] 
W1117 12:08:49.992903    7383 out.go:241] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
W1117 12:08:49.992915    7383 out.go:241] * 
* 
W1117 12:08:49.995942    7383 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1117 12:08:50.018207    7383 out.go:176] 
multinode_test.go:239: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 node start m03 --alsologtostderr": exit status 85
multinode_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status
multinode_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status: exit status 7 (146.56957ms)

                                                
                                                
-- stdout --
	multinode-20211117120800-2067
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:08:50.169732    7384 status.go:258] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	E1117 12:08:50.169748    7384 status.go:261] The "multinode-20211117120800-2067" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:245: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status" : exit status 7
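As in the earlier subtests, the status command exits with code 7 and reports every component as "Nonexistent"; the suite's own helpers (helpers_test.go:239 above) treat exit status 7 as "may be ok", i.e. the profile exists but the host does not. A hedged sketch of pulling that exit code out of exec so a caller can separate "host not created" from other failures; the meaning of 7 is taken from this report's helper output, not from a documented contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", "multinode-20211117120800-2067")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host: %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Matches the helpers above: status ran, but the host is Nonexistent.
		fmt.Printf("host not created yet (exit 7): %s", out)
	default:
		fmt.Printf("status failed: %v: %s", err, out)
	}
}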
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "03db1702f4ff1ac7cf9d70ee3249b375af57492cc58c6cd18f61f1206a4a4d75",
	        "Created": "2021-11-17T20:08:36.247710996Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (143.10857ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:08:50.421686    7393 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.59s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (84.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:265: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211117120800-2067
multinode_test.go:272: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20211117120800-2067
multinode_test.go:272: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-20211117120800-2067: exit status 82 (14.735110358s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-20211117120800-2067"  ...
	* Stopping node "multinode-20211117120800-2067"  ...
	* Stopping node "multinode-20211117120800-2067"  ...
	* Stopping node "multinode-20211117120800-2067"  ...
	* Stopping node "multinode-20211117120800-2067"  ...
	* Stopping node "multinode-20211117120800-2067"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20211117120800-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:274: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-20211117120800-2067" : exit status 82
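The stop above printed "Stopping node" six times and then gave up with GUEST_STOP_TIMEOUT, and the start log that follows shows the same shape one layer down: retry.go backs off (552ms, 1.08s, 1.31s, ...) while every "docker container inspect" probe returns "No such container". A hedged sketch of that kind of bounded verify-with-backoff loop; the container name and inspect command come from this log, while the helper and its delays are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitExited polls "docker container inspect" until the container reports "exited"
// or the attempts run out, roughly mirroring the retry.go lines later in this log.
func waitExited(name string, attempts int) error {
	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format={{.State.Status}}").CombinedOutput()
		if err == nil && strings.TrimSpace(string(out)) == "exited" {
			return nil
		}
		// In this run the container does not exist at all, so every probe fails
		// with "No such container" and the loop exhausts its budget.
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("container %s never reached the exited state", name)
}

func main() {
	fmt.Println(waitExited("multinode-20211117120800-2067", 6))
}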
multinode_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117120800-2067 --wait=true -v=8 --alsologtostderr
multinode_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211117120800-2067 --wait=true -v=8 --alsologtostderr: exit status 80 (1m8.740581512s)

                                                
                                                
-- stdout --
	* [multinode-20211117120800-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20211117120800-2067 in cluster multinode-20211117120800-2067
	* Pulling base image ...
	* docker "multinode-20211117120800-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20211117120800-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:09:05.241797    7426 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:09:05.241933    7426 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:09:05.241938    7426 out.go:310] Setting ErrFile to fd 2...
	I1117 12:09:05.241941    7426 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:09:05.242014    7426 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:09:05.242291    7426 out.go:304] Setting JSON to false
	I1117 12:09:05.265972    7426 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":2320,"bootTime":1637177425,"procs":319,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:09:05.266058    7426 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:09:05.293439    7426 out.go:176] * [multinode-20211117120800-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:09:05.293696    7426 notify.go:174] Checking for updates...
	I1117 12:09:05.341714    7426 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:09:05.367610    7426 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:09:05.393878    7426 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:09:05.419560    7426 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:09:05.419920    7426 config.go:176] Loaded profile config "multinode-20211117120800-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:09:05.419954    7426 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:09:05.510123    7426 docker.go:132] docker version: linux-20.10.5
	I1117 12:09:05.510231    7426 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:09:05.661310    7426 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:09:05.61991393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:09:05.688334    7426 out.go:176] * Using the docker driver based on existing profile
	I1117 12:09:05.688452    7426 start.go:280] selected driver: docker
	I1117 12:09:05.688461    7426 start.go:775] validating driver "docker" against &{Name:multinode-20211117120800-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117120800-2067 Namespace:default APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:09:05.688558    7426 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:09:05.688942    7426 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:09:05.841530    7426 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:09:05.800448609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:09:05.843488    7426 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:09:05.843516    7426 cni.go:93] Creating CNI manager for ""
	I1117 12:09:05.843521    7426 cni.go:154] 1 nodes found, recommending kindnet
	I1117 12:09:05.843537    7426 start_flags.go:282] config:
	{Name:multinode-20211117120800-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117120800-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:09:05.870425    7426 out.go:176] * Starting control plane node multinode-20211117120800-2067 in cluster multinode-20211117120800-2067
	I1117 12:09:05.870502    7426 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:09:05.918156    7426 out.go:176] * Pulling base image ...
	I1117 12:09:05.918229    7426 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:09:05.918298    7426 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:09:05.918329    7426 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:09:05.918363    7426 cache.go:57] Caching tarball of preloaded images
	I1117 12:09:05.918617    7426 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:09:05.918639    7426 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:09:05.919816    7426 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/multinode-20211117120800-2067/config.json ...
	I1117 12:09:06.032850    7426 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:09:06.032875    7426 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:09:06.032889    7426 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:09:06.032944    7426 start.go:313] acquiring machines lock for multinode-20211117120800-2067: {Name:mkad1352d1520800be4d619e3690050418979e5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:09:06.033037    7426 start.go:317] acquired machines lock for "multinode-20211117120800-2067" in 74.062µs
	I1117 12:09:06.033063    7426 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:09:06.033072    7426 fix.go:55] fixHost starting: 
	I1117 12:09:06.033376    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:06.133972    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:06.134025    7426 fix.go:108] recreateIfNeeded on multinode-20211117120800-2067: state= err=unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:06.134045    7426 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:09:06.160763    7426 out.go:176] * docker "multinode-20211117120800-2067" container is missing, will recreate.
	I1117 12:09:06.160783    7426 delete.go:124] DEMOLISHING multinode-20211117120800-2067 ...
	I1117 12:09:06.160895    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:06.261433    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:09:06.261473    7426 stop.go:75] unable to get state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:06.261485    7426 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:06.261884    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:06.362045    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:06.362090    7426 delete.go:82] Unable to get host status for multinode-20211117120800-2067, assuming it has already been deleted: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:06.362187    7426 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117120800-2067
	W1117 12:09:06.463854    7426 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117120800-2067 returned with exit code 1
	I1117 12:09:06.463887    7426 kic.go:360] could not find the container multinode-20211117120800-2067 to remove it. will try anyways
	I1117 12:09:06.463996    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:06.565370    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:09:06.565410    7426 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:06.565534    7426 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0"
	W1117 12:09:06.666973    7426 cli_runner.go:162] docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:09:06.666999    7426 oci.go:656] error shutdown multinode-20211117120800-2067: docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:07.668706    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:07.774581    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:07.774631    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:07.774642    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:07.774673    7426 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:08.337283    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:08.446176    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:08.446215    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:08.446223    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:08.446254    7426 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:09.526905    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:09.632444    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:09.632493    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:09.632501    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:09.632522    7426 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:10.943109    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:11.048638    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:11.048686    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:11.048695    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:11.048717    7426 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:12.634841    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:12.738881    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:12.738922    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:12.738930    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:12.738953    7426 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:15.089804    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:15.192250    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:15.192291    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:15.192298    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:15.192319    7426 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:19.704609    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:19.805967    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:19.806007    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:19.806029    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:19.806055    7426 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:23.028354    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:23.129317    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:23.129356    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:23.129365    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:23.129390    7426 oci.go:87] couldn't shut down multinode-20211117120800-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	 
	I1117 12:09:23.129480    7426 cli_runner.go:115] Run: docker rm -f -v multinode-20211117120800-2067
	I1117 12:09:23.229934    7426 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117120800-2067
	W1117 12:09:23.328586    7426 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117120800-2067 returned with exit code 1
	I1117 12:09:23.328705    7426 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:09:23.429636    7426 cli_runner.go:115] Run: docker network rm multinode-20211117120800-2067
	I1117 12:09:26.149412    7426 cli_runner.go:168] Completed: docker network rm multinode-20211117120800-2067: (2.71975371s)
	W1117 12:09:26.149687    7426 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:09:26.149693    7426 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:09:27.159816    7426 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:09:27.187312    7426 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:09:27.187500    7426 start.go:160] libmachine.API.Create for "multinode-20211117120800-2067" (driver="docker")
	I1117 12:09:27.187565    7426 client.go:168] LocalClient.Create starting
	I1117 12:09:27.187798    7426 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:09:27.187882    7426 main.go:130] libmachine: Decoding PEM data...
	I1117 12:09:27.187914    7426 main.go:130] libmachine: Parsing certificate...
	I1117 12:09:27.188031    7426 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:09:27.188099    7426 main.go:130] libmachine: Decoding PEM data...
	I1117 12:09:27.188115    7426 main.go:130] libmachine: Parsing certificate...
	I1117 12:09:27.188968    7426 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:09:27.291890    7426 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:09:27.291999    7426 network_create.go:254] running [docker network inspect multinode-20211117120800-2067] to gather additional debugging logs...
	I1117 12:09:27.292022    7426 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067
	W1117 12:09:27.392717    7426 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 returned with exit code 1
	I1117 12:09:27.392742    7426 network_create.go:257] error running [docker network inspect multinode-20211117120800-2067]: docker network inspect multinode-20211117120800-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117120800-2067
	I1117 12:09:27.392756    7426 network_create.go:259] output of [docker network inspect multinode-20211117120800-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117120800-2067
	
	** /stderr **
	I1117 12:09:27.392859    7426 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:09:27.496583    7426 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000310060] misses:0}
	I1117 12:09:27.496620    7426 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:09:27.496637    7426 network_create.go:106] attempt to create docker network multinode-20211117120800-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:09:27.496716    7426 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067
	I1117 12:09:31.311514    7426 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067: (3.814770161s)
	I1117 12:09:31.311543    7426 network_create.go:90] docker network multinode-20211117120800-2067 192.168.49.0/24 created
	I1117 12:09:31.311565    7426 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117120800-2067" container
	I1117 12:09:31.311684    7426 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:09:31.412572    7426 cli_runner.go:115] Run: docker volume create multinode-20211117120800-2067 --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:09:31.514915    7426 oci.go:102] Successfully created a docker volume multinode-20211117120800-2067
	I1117 12:09:31.515042    7426 cli_runner.go:115] Run: docker run --rm --name multinode-20211117120800-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --entrypoint /usr/bin/test -v multinode-20211117120800-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:09:31.928157    7426 oci.go:106] Successfully prepared a docker volume multinode-20211117120800-2067
	E1117 12:09:31.928212    7426 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:09:31.928227    7426 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:09:31.928235    7426 client.go:171] LocalClient.Create took 4.740703089s
	I1117 12:09:31.928253    7426 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:09:31.928379    7426 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117120800-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:09:33.933928    7426 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:09:33.934060    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:09:34.076322    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:09:34.076435    7426 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:34.226655    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:09:34.369524    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:09:34.369610    7426 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:34.675099    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:09:34.791596    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:09:34.791678    7426 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:35.367941    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:09:35.484179    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	W1117 12:09:35.484278    7426 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:09:35.484295    7426 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:35.484307    7426 start.go:129] duration metric: createHost completed in 8.324520006s
	I1117 12:09:35.484373    7426 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:09:35.484441    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:09:35.600409    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:09:35.600518    7426 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:35.779554    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:09:35.898580    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:09:35.898692    7426 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:36.229179    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:09:36.350129    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:09:36.350246    7426 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:36.812750    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:09:36.933832    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	W1117 12:09:36.933923    7426 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:09:36.933945    7426 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:36.933953    7426 fix.go:57] fixHost completed within 30.901169592s
	I1117 12:09:36.933961    7426 start.go:80] releasing machines lock for "multinode-20211117120800-2067", held for 30.901202002s
	W1117 12:09:36.933978    7426 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:09:36.934104    7426 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:09:36.934113    7426 start.go:547] Will try again in 5 seconds ...
	I1117 12:09:38.443750    7426 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117120800-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.515408923s)
	I1117 12:09:38.443775    7426 kic.go:188] duration metric: took 6.515585 seconds to extract preloaded images to volume
	I1117 12:09:41.942108    7426 start.go:313] acquiring machines lock for multinode-20211117120800-2067: {Name:mkad1352d1520800be4d619e3690050418979e5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:09:41.942272    7426 start.go:317] acquired machines lock for "multinode-20211117120800-2067" in 132.12µs
	I1117 12:09:41.942314    7426 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:09:41.942323    7426 fix.go:55] fixHost starting: 
	I1117 12:09:41.942789    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:42.048151    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:42.048205    7426 fix.go:108] recreateIfNeeded on multinode-20211117120800-2067: state= err=unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:42.048215    7426 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:09:42.096848    7426 out.go:176] * docker "multinode-20211117120800-2067" container is missing, will recreate.
	I1117 12:09:42.096896    7426 delete.go:124] DEMOLISHING multinode-20211117120800-2067 ...
	I1117 12:09:42.097086    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:42.198924    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:09:42.198962    7426 stop.go:75] unable to get state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:42.198974    7426 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:42.199355    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:42.300729    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:42.300775    7426 delete.go:82] Unable to get host status for multinode-20211117120800-2067, assuming it has already been deleted: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:42.300867    7426 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117120800-2067
	W1117 12:09:42.402364    7426 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117120800-2067 returned with exit code 1
	I1117 12:09:42.402389    7426 kic.go:360] could not find the container multinode-20211117120800-2067 to remove it. will try anyways
	I1117 12:09:42.402468    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:42.507281    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:09:42.507319    7426 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:42.507409    7426 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0"
	W1117 12:09:42.609228    7426 cli_runner.go:162] docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:09:42.609252    7426 oci.go:656] error shutdown multinode-20211117120800-2067: docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:43.619196    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:43.724067    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:43.724107    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:43.724124    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:43.724148    7426 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:44.123826    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:44.228540    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:44.228586    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:44.228603    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:44.228626    7426 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:44.828078    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:44.934306    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:44.934372    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:44.934385    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:44.934408    7426 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:46.264147    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:46.368653    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:46.368694    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:46.368701    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:46.368724    7426 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:47.589917    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:47.691102    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:47.691141    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:47.691151    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:47.691173    7426 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:49.473584    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:49.577342    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:49.577384    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:49.577393    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:49.577413    7426 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:52.847365    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:52.951623    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:52.951667    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:52.951685    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:52.951706    7426 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:59.059874    7426 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:09:59.163590    7426 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:09:59.163637    7426 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:09:59.163647    7426 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:09:59.163672    7426 oci.go:87] couldn't shut down multinode-20211117120800-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	 
	I1117 12:09:59.163758    7426 cli_runner.go:115] Run: docker rm -f -v multinode-20211117120800-2067
	I1117 12:09:59.286813    7426 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117120800-2067
	W1117 12:09:59.385759    7426 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117120800-2067 returned with exit code 1
	I1117 12:09:59.385894    7426 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:09:59.486299    7426 cli_runner.go:115] Run: docker network rm multinode-20211117120800-2067
	I1117 12:10:02.330280    7426 cli_runner.go:168] Completed: docker network rm multinode-20211117120800-2067: (2.843945425s)
	W1117 12:10:02.330852    7426 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:10:02.330859    7426 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:10:03.340980    7426 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:10:03.368456    7426 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:10:03.368631    7426 start.go:160] libmachine.API.Create for "multinode-20211117120800-2067" (driver="docker")
	I1117 12:10:03.368668    7426 client.go:168] LocalClient.Create starting
	I1117 12:10:03.368825    7426 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:10:03.368918    7426 main.go:130] libmachine: Decoding PEM data...
	I1117 12:10:03.368943    7426 main.go:130] libmachine: Parsing certificate...
	I1117 12:10:03.369053    7426 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:10:03.369107    7426 main.go:130] libmachine: Decoding PEM data...
	I1117 12:10:03.369124    7426 main.go:130] libmachine: Parsing certificate...
	I1117 12:10:03.370036    7426 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:10:03.473200    7426 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:10:03.473311    7426 network_create.go:254] running [docker network inspect multinode-20211117120800-2067] to gather additional debugging logs...
	I1117 12:10:03.473332    7426 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067
	W1117 12:10:03.573632    7426 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 returned with exit code 1
	I1117 12:10:03.573664    7426 network_create.go:257] error running [docker network inspect multinode-20211117120800-2067]: docker network inspect multinode-20211117120800-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117120800-2067
	I1117 12:10:03.573680    7426 network_create.go:259] output of [docker network inspect multinode-20211117120800-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117120800-2067
	
	** /stderr **
	I1117 12:10:03.573804    7426 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:10:03.674837    7426 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000310060] amended:false}} dirty:map[] misses:0}
	I1117 12:10:03.674871    7426 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:10:03.675081    7426 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000310060] amended:true}} dirty:map[192.168.49.0:0xc000310060 192.168.58.0:0xc00000e668] misses:0}
	I1117 12:10:03.675098    7426 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:10:03.675105    7426 network_create.go:106] attempt to create docker network multinode-20211117120800-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:10:03.675188    7426 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067
	I1117 12:10:07.547840    7426 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067: (3.872646878s)
	I1117 12:10:07.547862    7426 network_create.go:90] docker network multinode-20211117120800-2067 192.168.58.0/24 created
	I1117 12:10:07.547873    7426 kic.go:106] calculated static IP "192.168.58.2" for the "multinode-20211117120800-2067" container
	I1117 12:10:07.547986    7426 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:10:07.647796    7426 cli_runner.go:115] Run: docker volume create multinode-20211117120800-2067 --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:10:07.747472    7426 oci.go:102] Successfully created a docker volume multinode-20211117120800-2067
	I1117 12:10:07.747657    7426 cli_runner.go:115] Run: docker run --rm --name multinode-20211117120800-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --entrypoint /usr/bin/test -v multinode-20211117120800-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:10:08.155159    7426 oci.go:106] Successfully prepared a docker volume multinode-20211117120800-2067
	E1117 12:10:08.155215    7426 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:10:08.155225    7426 client.go:171] LocalClient.Create took 4.786595415s
	I1117 12:10:08.155244    7426 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:10:08.155262    7426 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:10:08.155362    7426 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117120800-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:10:10.155732    7426 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:10:10.155858    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:10:10.314182    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:10:10.314305    7426 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:10.513211    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:10:10.629202    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:10:10.629297    7426 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:10.928529    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:10:11.045024    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:10:11.045136    7426 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:11.757879    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:10:11.876796    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	W1117 12:10:11.876906    7426 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:10:11.876932    7426 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:11.876946    7426 start.go:129] duration metric: createHost completed in 8.535994612s
	I1117 12:10:11.877023    7426 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:10:11.877096    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:10:11.995845    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:10:11.995926    7426 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:12.341168    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:10:12.467902    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:10:12.468012    7426 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:12.917077    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:10:13.036126    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:10:13.036246    7426 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:13.612428    7426 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:10:13.713469    7426 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	W1117 12:10:13.713552    7426 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:10:13.713569    7426 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:13.713578    7426 fix.go:57] fixHost completed within 31.771551007s
	I1117 12:10:13.713589    7426 start.go:80] releasing machines lock for "multinode-20211117120800-2067", held for 31.771599907s
	W1117 12:10:13.713722    7426 out.go:241] * Failed to start docker container. Running "minikube delete -p multinode-20211117120800-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117120800-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:10:13.821385    7426 out.go:176] 
	W1117 12:10:13.821587    7426 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:10:13.821613    7426 out.go:241] * 
	* 
	W1117 12:10:13.822695    7426 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:10:13.915284    7426 out.go:176] 

** /stderr **
multinode_test.go:279: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-20211117120800-2067" : exit status 80
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211117120800-2067
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "ee2e2d1a673f4896b913e66c3f481857cf16f526d78a5dffc96799d08b0f674b",
	        "Created": "2021-11-17T20:10:03.783923476Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (206.296083ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:10:14.459477    7736 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (84.04s)

TestMultiNode/serial/DeleteNode (0.73s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117120800-2067 node delete m03
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117120800-2067 node delete m03: exit status 80 (332.477958ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:378: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 node delete m03": exit status 80
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr: exit status 7 (144.981052ms)

                                                
                                                
-- stdout --
	multinode-20211117120800-2067
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:10:14.835410    7746 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:10:14.835541    7746 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:10:14.835545    7746 out.go:310] Setting ErrFile to fd 2...
	I1117 12:10:14.835554    7746 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:10:14.835632    7746 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:10:14.835803    7746 out.go:304] Setting JSON to false
	I1117 12:10:14.835817    7746 mustload.go:65] Loading cluster: multinode-20211117120800-2067
	I1117 12:10:14.836043    7746 config.go:176] Loaded profile config "multinode-20211117120800-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:10:14.836055    7746 status.go:253] checking status of multinode-20211117120800-2067 ...
	I1117 12:10:14.836398    7746 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:14.937180    7746 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:14.937239    7746 status.go:328] multinode-20211117120800-2067 host status = "" (err=state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	)
	I1117 12:10:14.937278    7746 status.go:255] multinode-20211117120800-2067 status: &{Name:multinode-20211117120800-2067 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 12:10:14.937303    7746 status.go:258] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	E1117 12:10:14.937307    7746 status.go:261] The "multinode-20211117120800-2067" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:384: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "ee2e2d1a673f4896b913e66c3f481857cf16f526d78a5dffc96799d08b0f674b",
	        "Created": "2021-11-17T20:10:03.783923476Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (146.262976ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:10:15.188147    7755 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.73s)
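Note: each failure in this group bottoms out in the same probe, docker container inspect <profile> --format={{.State.Status}}, which exits 1 with "No such container" and is then reported as host "Nonexistent" with exit status 7. The Go sketch below reproduces that probe by hand; it is not minikube's status.go, the profile name and the "Nonexistent"/exit-7 mapping are simply copied from the log above, and it assumes only that the docker CLI is on PATH.

// checkstate.go - hand-run sketch of the host-state probe seen in this report.
// Not minikube code; the profile name, the "Nonexistent" wording and exit code 7
// are taken from the log above as assumptions.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	name := "multinode-20211117120800-2067" // profile under test in this report
	if len(os.Args) > 1 {
		name = os.Args[1]
	}
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		// "Error: No such container: ..." ends up here; the report surfaces
		// this condition as host: Nonexistent.
		fmt.Printf("Nonexistent (%v): %s", err, out)
		os.Exit(7)
	}
	fmt.Println(strings.TrimSpace(string(out)))
}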

                                                
                                    
TestMultiNode/serial/StopMultiNode (15.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117120800-2067 stop
multinode_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117120800-2067 stop: exit status 82 (14.780279569s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-20211117120800-2067"  ...
	* Stopping node "multinode-20211117120800-2067"  ...
	* Stopping node "multinode-20211117120800-2067"  ...
	* Stopping node "multinode-20211117120800-2067"  ...
	* Stopping node "multinode-20211117120800-2067"  ...
	* Stopping node "multinode-20211117120800-2067"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20211117120800-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:298: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 stop": exit status 82
multinode_test.go:302: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status
multinode_test.go:302: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status: exit status 7 (142.198793ms)

                                                
                                                
-- stdout --
	multinode-20211117120800-2067
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:10:30.110780    7785 status.go:258] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	E1117 12:10:30.110788    7785 status.go:261] The "multinode-20211117120800-2067" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:309: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr
multinode_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr: exit status 7 (140.236746ms)

                                                
                                                
-- stdout --
	multinode-20211117120800-2067
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:10:30.151198    7790 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:10:30.151334    7790 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:10:30.151339    7790 out.go:310] Setting ErrFile to fd 2...
	I1117 12:10:30.151342    7790 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:10:30.151416    7790 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:10:30.151584    7790 out.go:304] Setting JSON to false
	I1117 12:10:30.151598    7790 mustload.go:65] Loading cluster: multinode-20211117120800-2067
	I1117 12:10:30.151831    7790 config.go:176] Loaded profile config "multinode-20211117120800-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:10:30.151844    7790 status.go:253] checking status of multinode-20211117120800-2067 ...
	I1117 12:10:30.152189    7790 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:30.251052    7790 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:30.251109    7790 status.go:328] multinode-20211117120800-2067 host status = "" (err=state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	)
	I1117 12:10:30.251127    7790 status.go:255] multinode-20211117120800-2067 status: &{Name:multinode-20211117120800-2067 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 12:10:30.251144    7790 status.go:258] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	E1117 12:10:30.251148    7790 status.go:261] The "multinode-20211117120800-2067" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:315: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr": multinode-20211117120800-2067
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:319: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-20211117120800-2067 status --alsologtostderr": multinode-20211117120800-2067
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "ee2e2d1a673f4896b913e66c3f481857cf16f526d78a5dffc96799d08b0f674b",
	        "Created": "2021-11-17T20:10:03.783923476Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (143.775461ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:10:30.498806    7799 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (15.31s)
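Note: the post-mortem docker inspect output above returns a bridge network named after the profile (IPAM subnet 192.168.58.0/24, empty Containers map), not a container; with the container already gone, docker inspect resolves the name to the leftover minikube network. The Go sketch below is a by-hand version of that post-mortem check, assuming only the docker CLI; the profile name and the {{range .IPAM.Config}}{{.Subnet}}{{end}} template are taken from this report, and it is not part of the test suite.

// postmortem.go - sketch of the post-mortem shown above: confirm the profile's
// container is gone while its bridge network still exists. Assumes the docker
// CLI is on PATH; names and the format template come from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// exists reports whether "docker <kind> inspect <name>" succeeds; the CLI
// exits non-zero when the object is missing, as seen throughout this report.
func exists(kind, name string) bool {
	return exec.Command("docker", kind, "inspect", name).Run() == nil
}

func main() {
	name := "multinode-20211117120800-2067" // profile under test in this report
	fmt.Printf("container %q present: %v\n", name, exists("container", name))
	fmt.Printf("network   %q present: %v\n", name, exists("network", name))
	if !exists("container", name) && exists("network", name) {
		out, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if err == nil {
			fmt.Println("leftover minikube network, subnet:", strings.TrimSpace(string(out)))
		}
	}
}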

                                                
                                    
TestMultiNode/serial/RestartMultiNode (69.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:326: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:336: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117120800-2067 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:336: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211117120800-2067 --wait=true -v=8 --alsologtostderr --driver=docker : exit status 80 (1m9.261361467s)

                                                
                                                
-- stdout --
	* [multinode-20211117120800-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20211117120800-2067 in cluster multinode-20211117120800-2067
	* Pulling base image ...
	* docker "multinode-20211117120800-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20211117120800-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:10:30.628606    7807 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:10:30.628799    7807 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:10:30.628804    7807 out.go:310] Setting ErrFile to fd 2...
	I1117 12:10:30.628807    7807 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:10:30.628882    7807 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:10:30.629132    7807 out.go:304] Setting JSON to false
	I1117 12:10:30.652784    7807 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":2405,"bootTime":1637177425,"procs":319,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:10:30.652874    7807 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:10:30.679783    7807 out.go:176] * [multinode-20211117120800-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:10:30.679971    7807 notify.go:174] Checking for updates...
	I1117 12:10:30.728359    7807 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:10:30.754352    7807 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:10:30.780523    7807 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:10:30.806089    7807 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:10:30.806446    7807 config.go:176] Loaded profile config "multinode-20211117120800-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:10:30.806803    7807 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:10:30.896011    7807 docker.go:132] docker version: linux-20.10.5
	I1117 12:10:30.896123    7807 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:10:31.047135    7807 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:10:31.002582713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:10:31.096116    7807 out.go:176] * Using the docker driver based on existing profile
	I1117 12:10:31.096165    7807 start.go:280] selected driver: docker
	I1117 12:10:31.096177    7807 start.go:775] validating driver "docker" against &{Name:multinode-20211117120800-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117120800-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:10:31.096305    7807 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:10:31.096666    7807 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:10:31.247376    7807 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:10:31.203575725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:10:31.249350    7807 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:10:31.249374    7807 cni.go:93] Creating CNI manager for ""
	I1117 12:10:31.249378    7807 cni.go:154] 1 nodes found, recommending kindnet
	I1117 12:10:31.249389    7807 start_flags.go:282] config:
	{Name:multinode-20211117120800-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117120800-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:10:31.276399    7807 out.go:176] * Starting control plane node multinode-20211117120800-2067 in cluster multinode-20211117120800-2067
	I1117 12:10:31.276512    7807 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:10:31.351060    7807 out.go:176] * Pulling base image ...
	I1117 12:10:31.351198    7807 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:10:31.351211    7807 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:10:31.351296    7807 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:10:31.351329    7807 cache.go:57] Caching tarball of preloaded images
	I1117 12:10:31.352074    7807 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:10:31.352280    7807 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:10:31.352693    7807 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/multinode-20211117120800-2067/config.json ...
	I1117 12:10:31.467103    7807 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:10:31.467117    7807 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:10:31.467128    7807 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:10:31.467171    7807 start.go:313] acquiring machines lock for multinode-20211117120800-2067: {Name:mkad1352d1520800be4d619e3690050418979e5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:10:31.467262    7807 start.go:317] acquired machines lock for "multinode-20211117120800-2067" in 73.606µs
	I1117 12:10:31.467284    7807 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:10:31.467292    7807 fix.go:55] fixHost starting: 
	I1117 12:10:31.467539    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:31.567620    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:31.567680    7807 fix.go:108] recreateIfNeeded on multinode-20211117120800-2067: state= err=unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:31.567700    7807 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:10:31.595003    7807 out.go:176] * docker "multinode-20211117120800-2067" container is missing, will recreate.
	I1117 12:10:31.595065    7807 delete.go:124] DEMOLISHING multinode-20211117120800-2067 ...
	I1117 12:10:31.595272    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:31.697951    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:10:31.697997    7807 stop.go:75] unable to get state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:31.698013    7807 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:31.698397    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:31.797588    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:31.797648    7807 delete.go:82] Unable to get host status for multinode-20211117120800-2067, assuming it has already been deleted: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:31.797740    7807 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117120800-2067
	W1117 12:10:31.898791    7807 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117120800-2067 returned with exit code 1
	I1117 12:10:31.898827    7807 kic.go:360] could not find the container multinode-20211117120800-2067 to remove it. will try anyways
	I1117 12:10:31.898906    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:32.001370    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:10:32.001421    7807 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:32.001508    7807 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0"
	W1117 12:10:32.103144    7807 cli_runner.go:162] docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:10:32.103169    7807 oci.go:656] error shutdown multinode-20211117120800-2067: docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:33.113505    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:33.214712    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:33.214754    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:33.214771    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:10:33.214802    7807 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:33.770273    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:33.875347    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:33.875386    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:33.875395    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:10:33.875414    7807 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:34.966189    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:35.071850    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:35.071889    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:35.071900    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:10:35.071920    7807 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:36.382504    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:36.484458    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:36.484508    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:36.484517    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:10:36.484541    7807 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:38.073382    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:38.175508    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:38.175559    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:38.175570    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:10:38.175592    7807 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:40.521496    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:40.630904    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:40.630945    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:40.630955    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:10:40.630981    7807 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:45.139774    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:45.243066    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:45.243105    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:45.243113    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:10:45.243132    7807 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:48.471745    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:10:48.575049    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:10:48.575086    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:48.575093    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:10:48.575116    7807 oci.go:87] couldn't shut down multinode-20211117120800-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	 
	I1117 12:10:48.575195    7807 cli_runner.go:115] Run: docker rm -f -v multinode-20211117120800-2067
	I1117 12:10:48.675236    7807 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117120800-2067
	W1117 12:10:48.775574    7807 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117120800-2067 returned with exit code 1
	I1117 12:10:48.775682    7807 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:10:48.875345    7807 cli_runner.go:115] Run: docker network rm multinode-20211117120800-2067
	I1117 12:10:51.672075    7807 cli_runner.go:168] Completed: docker network rm multinode-20211117120800-2067: (2.796715768s)
	W1117 12:10:51.672359    7807 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:10:51.672366    7807 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:10:52.682500    7807 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:10:52.709870    7807 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:10:52.710111    7807 start.go:160] libmachine.API.Create for "multinode-20211117120800-2067" (driver="docker")
	I1117 12:10:52.710154    7807 client.go:168] LocalClient.Create starting
	I1117 12:10:52.710353    7807 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:10:52.710434    7807 main.go:130] libmachine: Decoding PEM data...
	I1117 12:10:52.710467    7807 main.go:130] libmachine: Parsing certificate...
	I1117 12:10:52.710617    7807 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:10:52.710679    7807 main.go:130] libmachine: Decoding PEM data...
	I1117 12:10:52.710696    7807 main.go:130] libmachine: Parsing certificate...
	I1117 12:10:52.712371    7807 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:10:52.814972    7807 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:10:52.815070    7807 network_create.go:254] running [docker network inspect multinode-20211117120800-2067] to gather additional debugging logs...
	I1117 12:10:52.815088    7807 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067
	W1117 12:10:52.914813    7807 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 returned with exit code 1
	I1117 12:10:52.914838    7807 network_create.go:257] error running [docker network inspect multinode-20211117120800-2067]: docker network inspect multinode-20211117120800-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117120800-2067
	I1117 12:10:52.914857    7807 network_create.go:259] output of [docker network inspect multinode-20211117120800-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117120800-2067
	
	** /stderr **
	I1117 12:10:52.914958    7807 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:10:53.016243    7807 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b10150] misses:0}
	I1117 12:10:53.016283    7807 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:10:53.016299    7807 network_create.go:106] attempt to create docker network multinode-20211117120800-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:10:53.016384    7807 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067
	I1117 12:10:56.830336    7807 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067: (3.813939376s)
	I1117 12:10:56.830360    7807 network_create.go:90] docker network multinode-20211117120800-2067 192.168.49.0/24 created
	I1117 12:10:56.830377    7807 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117120800-2067" container
	I1117 12:10:56.830485    7807 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:10:56.930851    7807 cli_runner.go:115] Run: docker volume create multinode-20211117120800-2067 --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:10:57.030994    7807 oci.go:102] Successfully created a docker volume multinode-20211117120800-2067
	I1117 12:10:57.031141    7807 cli_runner.go:115] Run: docker run --rm --name multinode-20211117120800-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --entrypoint /usr/bin/test -v multinode-20211117120800-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:10:57.435875    7807 oci.go:106] Successfully prepared a docker volume multinode-20211117120800-2067
	E1117 12:10:57.435938    7807 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:10:57.435954    7807 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:10:57.435956    7807 client.go:171] LocalClient.Create took 4.725838667s
	I1117 12:10:57.435977    7807 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:10:57.436106    7807 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117120800-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:10:59.436175    7807 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:10:59.436271    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:10:59.579799    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:10:59.579945    7807 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:10:59.729802    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:00.216047    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:00.216134    7807 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:00.521687    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:00.636810    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:00.636896    7807 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:01.208619    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:01.332347    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	W1117 12:11:01.332443    7807 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:11:01.332469    7807 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:01.332481    7807 start.go:129] duration metric: createHost completed in 8.650018286s
	I1117 12:11:01.332549    7807 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:11:01.332625    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:01.443886    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:01.443969    7807 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:01.626695    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:01.744796    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:01.744884    7807 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:02.075341    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:02.201831    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:02.201921    7807 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:02.667958    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:02.786978    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	W1117 12:11:02.787081    7807 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:11:02.787110    7807 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:02.787122    7807 fix.go:57] fixHost completed within 31.320121681s
	I1117 12:11:02.787139    7807 start.go:80] releasing machines lock for "multinode-20211117120800-2067", held for 31.320159517s
	W1117 12:11:02.787163    7807 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:11:02.787309    7807 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:11:02.787319    7807 start.go:547] Will try again in 5 seconds ...
	I1117 12:11:03.801172    7807 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117120800-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.365098718s)
	I1117 12:11:03.801194    7807 kic.go:188] duration metric: took 6.365278 seconds to extract preloaded images to volume
	I1117 12:11:07.790583    7807 start.go:313] acquiring machines lock for multinode-20211117120800-2067: {Name:mkad1352d1520800be4d619e3690050418979e5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:11:07.790756    7807 start.go:317] acquired machines lock for "multinode-20211117120800-2067" in 139.148µs
	I1117 12:11:07.790796    7807 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:11:07.790805    7807 fix.go:55] fixHost starting: 
	I1117 12:11:07.791313    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:07.892685    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:11:07.892723    7807 fix.go:108] recreateIfNeeded on multinode-20211117120800-2067: state= err=unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:07.892735    7807 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:11:07.944568    7807 out.go:176] * docker "multinode-20211117120800-2067" container is missing, will recreate.
	I1117 12:11:07.944640    7807 delete.go:124] DEMOLISHING multinode-20211117120800-2067 ...
	I1117 12:11:07.944849    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:08.046018    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:11:08.046059    7807 stop.go:75] unable to get state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:08.046084    7807 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:08.046477    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:08.147476    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:11:08.147533    7807 delete.go:82] Unable to get host status for multinode-20211117120800-2067, assuming it has already been deleted: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:08.147652    7807 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117120800-2067
	W1117 12:11:08.248811    7807 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:08.248849    7807 kic.go:360] could not find the container multinode-20211117120800-2067 to remove it. will try anyways
	I1117 12:11:08.248974    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:08.352006    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:11:08.352049    7807 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:08.352141    7807 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0"
	W1117 12:11:08.452658    7807 cli_runner.go:162] docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:11:08.452688    7807 oci.go:656] error shutdown multinode-20211117120800-2067: docker exec --privileged -t multinode-20211117120800-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:09.462822    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:09.574043    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:11:09.574084    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:09.574095    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:11:09.574115    7807 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:09.972293    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:10.075377    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:11:10.075422    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:10.075432    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:11:10.075450    7807 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:10.679417    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:10.784147    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:11:10.784190    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:10.784198    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:11:10.784220    7807 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:12.120152    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:12.223526    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:11:12.223573    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:12.223582    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:11:12.223603    7807 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:13.440618    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:13.544435    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:11:13.544484    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:13.544513    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:11:13.544542    7807 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:15.326647    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:15.429916    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:11:15.429957    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:15.429967    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:11:15.429986    7807 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:18.707386    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:18.809733    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:11:18.809783    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:18.809794    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:11:18.809821    7807 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:24.912917    7807 cli_runner.go:115] Run: docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}
	W1117 12:11:25.013638    7807 cli_runner.go:162] docker container inspect multinode-20211117120800-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:11:25.013677    7807 oci.go:668] temporary error verifying shutdown: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:25.013687    7807 oci.go:670] temporary error: container multinode-20211117120800-2067 status is  but expect it to be exited
	I1117 12:11:25.013713    7807 oci.go:87] couldn't shut down multinode-20211117120800-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	 
	I1117 12:11:25.013797    7807 cli_runner.go:115] Run: docker rm -f -v multinode-20211117120800-2067
	I1117 12:11:25.113992    7807 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117120800-2067
	W1117 12:11:25.214520    7807 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:25.214639    7807 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:11:25.316839    7807 cli_runner.go:115] Run: docker network rm multinode-20211117120800-2067
	I1117 12:11:28.155200    7807 cli_runner.go:168] Completed: docker network rm multinode-20211117120800-2067: (2.838330915s)
	W1117 12:11:28.155482    7807 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:11:28.155489    7807 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:11:29.162611    7807 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:11:29.210716    7807 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:11:29.210919    7807 start.go:160] libmachine.API.Create for "multinode-20211117120800-2067" (driver="docker")
	I1117 12:11:29.210949    7807 client.go:168] LocalClient.Create starting
	I1117 12:11:29.211191    7807 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:11:29.211295    7807 main.go:130] libmachine: Decoding PEM data...
	I1117 12:11:29.211316    7807 main.go:130] libmachine: Parsing certificate...
	I1117 12:11:29.211402    7807 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:11:29.211456    7807 main.go:130] libmachine: Decoding PEM data...
	I1117 12:11:29.211473    7807 main.go:130] libmachine: Parsing certificate...
	I1117 12:11:29.212617    7807 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:11:29.336148    7807 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:11:29.336249    7807 network_create.go:254] running [docker network inspect multinode-20211117120800-2067] to gather additional debugging logs...
	I1117 12:11:29.336277    7807 cli_runner.go:115] Run: docker network inspect multinode-20211117120800-2067
	W1117 12:11:29.436446    7807 cli_runner.go:162] docker network inspect multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:29.436470    7807 network_create.go:257] error running [docker network inspect multinode-20211117120800-2067]: docker network inspect multinode-20211117120800-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117120800-2067
	I1117 12:11:29.436482    7807 network_create.go:259] output of [docker network inspect multinode-20211117120800-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117120800-2067
	
	** /stderr **
	I1117 12:11:29.436575    7807 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:11:29.536680    7807 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b10150] amended:false}} dirty:map[] misses:0}
	I1117 12:11:29.536711    7807 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:11:29.536893    7807 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b10150] amended:true}} dirty:map[192.168.49.0:0xc000b10150 192.168.58.0:0xc000b10290] misses:0}
	I1117 12:11:29.536905    7807 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:11:29.536912    7807 network_create.go:106] attempt to create docker network multinode-20211117120800-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:11:29.536995    7807 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067
	I1117 12:11:33.411294    7807 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117120800-2067: (3.874282665s)
	I1117 12:11:33.411326    7807 network_create.go:90] docker network multinode-20211117120800-2067 192.168.58.0/24 created
	I1117 12:11:33.411342    7807 kic.go:106] calculated static IP "192.168.58.2" for the "multinode-20211117120800-2067" container
	I1117 12:11:33.411461    7807 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:11:33.511192    7807 cli_runner.go:115] Run: docker volume create multinode-20211117120800-2067 --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:11:33.610357    7807 oci.go:102] Successfully created a docker volume multinode-20211117120800-2067
	I1117 12:11:33.610486    7807 cli_runner.go:115] Run: docker run --rm --name multinode-20211117120800-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117120800-2067 --entrypoint /usr/bin/test -v multinode-20211117120800-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:11:34.026887    7807 oci.go:106] Successfully prepared a docker volume multinode-20211117120800-2067
	E1117 12:11:34.026946    7807 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:11:34.026959    7807 client.go:171] LocalClient.Create took 4.816047691s
	I1117 12:11:34.026974    7807 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:11:34.026991    7807 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:11:34.027105    7807 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117120800-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:11:36.027246    7807 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:11:36.027352    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:36.155482    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:36.155608    7807 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:36.354148    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:36.481263    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:36.481366    7807 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:36.790224    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:36.907338    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:36.907417    7807 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:37.612198    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:37.730160    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	W1117 12:11:37.730268    7807 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:11:37.730285    7807 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:37.730298    7807 start.go:129] duration metric: createHost completed in 8.567717154s
	I1117 12:11:37.730368    7807 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:11:37.730459    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:37.849929    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:37.850031    7807 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:38.191763    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:38.322391    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:38.322476    7807 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:38.771637    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:38.891450    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	I1117 12:11:38.891552    7807 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:39.469247    7807 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067
	W1117 12:11:39.581790    7807 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067 returned with exit code 1
	W1117 12:11:39.581872    7807 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	W1117 12:11:39.581887    7807 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117120800-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117120800-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	I1117 12:11:39.581904    7807 fix.go:57] fixHost completed within 31.791394939s
	I1117 12:11:39.581913    7807 start.go:80] releasing machines lock for "multinode-20211117120800-2067", held for 31.791438321s
	W1117 12:11:39.582069    7807 out.go:241] * Failed to start docker container. Running "minikube delete -p multinode-20211117120800-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117120800-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:11:39.704139    7807 out.go:176] 
	W1117 12:11:39.704358    7807 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:11:39.704382    7807 out.go:241] * 
	* 
	W1117 12:11:39.705587    7807 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:11:39.832423    7807 out.go:176] 

** /stderr **
multinode_test.go:338: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-20211117120800-2067 --wait=true -v=8 --alsologtostderr --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117120800-2067",
	        "Id": "cd4e222ec6a75e8eb8c8485ce5faf7a6eed3f0d1cca3db72d2f276faa2a6f979",
	        "Created": "2021-11-17T20:11:29.642549992Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (161.640607ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:11:40.133687    8141 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (69.64s)
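Note on the failure mode above: every retry in the captured stderr is the same SSH host-port lookup, `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` against the missing container, and it can never succeed because the kic node was never created (oci.go:173: "Unable to locate kernel modules"). The sketch below is a minimal, illustrative Go version of that lookup, not minikube's cli_runner code; the container name is taken from this run and the error handling is simplified.

    // Minimal sketch (not minikube's implementation) of the SSH host-port
    // lookup seen throughout the log above: ask Docker which host port is
    // mapped to the container's 22/tcp. When the container does not exist,
    // the command exits non-zero with "No such container", which is the
    // failure mode in this run.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func sshHostPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).CombinedOutput()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %v: %s", container, err, strings.TrimSpace(string(out)))
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	// Container name from this test run; the lookup is expected to fail
    	// here because the kic node was never created.
    	port, err := sshHostPort("multinode-20211117120800-2067")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	fmt.Println("ssh is published on host port", port)
    }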

TestMultiNode/serial/ValidateNameConflict (102.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:425: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211117120800-2067
multinode_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117120800-2067-m01 --driver=docker 
multinode_test.go:434: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211117120800-2067-m01 --driver=docker : exit status 80 (45.993091736s)

-- stdout --
	* [multinode-20211117120800-2067-m01] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node multinode-20211117120800-2067-m01 in cluster multinode-20211117120800-2067-m01
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	* docker "multinode-20211117120800-2067-m01" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 12:11:46.092809    8147 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:12:20.699747    8147 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117120800-2067-m01" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:442: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117120800-2067-m02 --driver=docker 
multinode_test.go:442: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211117120800-2067-m02 --driver=docker : exit status 80 (45.46240599s)

-- stdout --
	* [multinode-20211117120800-2067-m02] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node multinode-20211117120800-2067-m02 in cluster multinode-20211117120800-2067-m02
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	* docker "multinode-20211117120800-2067-m02" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 12:12:32.024281    8382 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:13:06.338670    8382 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117120800-2067-m02" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:444: failed to start profile. args "out/minikube-darwin-amd64 start -p multinode-20211117120800-2067-m02 --driver=docker " : exit status 80
multinode_test.go:449: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20211117120800-2067
multinode_test.go:449: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20211117120800-2067: exit status 80 (367.428867ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:454: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20211117120800-2067-m02
multinode_test.go:454: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20211117120800-2067-m02: (10.021679204s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117120800-2067
helpers_test.go:235: (dbg) docker inspect multinode-20211117120800-2067:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T20:08:06Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-20211117120800-2067"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/multinode-20211117120800-2067/_data",
	        "Name": "multinode-20211117120800-2067",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117120800-2067 -n multinode-20211117120800-2067: exit status 7 (142.404732ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:13:22.265960    8673 status.go:247] status error: host: state: unknown state "multinode-20211117120800-2067": docker container inspect multinode-20211117120800-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117120800-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117120800-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (102.13s)
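The post-mortem steps above query host state with `out/minikube-darwin-amd64 status --format={{.Host}}`, which renders a Go template against the profile's status and prints "Nonexistent" once the container is gone. Below is a minimal sketch of that kind of template rendering; the `status` struct is an assumption for illustration, not minikube's actual status type.

    // Minimal sketch of how a --format value like {{.Host}} is rendered:
    // a Go text/template applied to a status struct. The struct here is
    // illustrative only; in this run the host state resolves to
    // "Nonexistent" because the container no longer exists.
    package main

    import (
    	"os"
    	"text/template"
    )

    type status struct {
    	Host    string
    	Kubelet string
    }

    func main() {
    	st := status{Host: "Nonexistent", Kubelet: "Nonexistent"}
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
    	if err := tmpl.Execute(os.Stdout, st); err != nil {
    		panic(err)
    	}
    }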

TestPreload (48.91s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20211117121323-2067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
preload_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20211117121323-2067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 80 (44.792296769s)

-- stdout --
	* [test-preload-20211117121323-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node test-preload-20211117121323-2067 in cluster test-preload-20211117121323-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "test-preload-20211117121323-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:13:23.701075    8716 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:13:23.701220    8716 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:13:23.701225    8716 out.go:310] Setting ErrFile to fd 2...
	I1117 12:13:23.701228    8716 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:13:23.701304    8716 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:13:23.701607    8716 out.go:304] Setting JSON to false
	I1117 12:13:23.725262    8716 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":2578,"bootTime":1637177425,"procs":320,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:13:23.725353    8716 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:13:23.752628    8716 out.go:176] * [test-preload-20211117121323-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:13:23.752865    8716 notify.go:174] Checking for updates...
	I1117 12:13:23.800079    8716 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:13:23.826092    8716 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:13:23.852259    8716 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:13:23.877936    8716 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:13:23.879287    8716 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:13:23.879349    8716 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:13:23.971130    8716 docker.go:132] docker version: linux-20.10.5
	I1117 12:13:23.971276    8716 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:13:24.123075    8716 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:13:24.073377011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:13:24.151039    8716 out.go:176] * Using the docker driver based on user configuration
	I1117 12:13:24.151157    8716 start.go:280] selected driver: docker
	I1117 12:13:24.151167    8716 start.go:775] validating driver "docker" against <nil>
	I1117 12:13:24.151186    8716 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:13:24.154548    8716 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:13:24.323649    8716 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:13:24.257005941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:13:24.323753    8716 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:13:24.323884    8716 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:13:24.323904    8716 cni.go:93] Creating CNI manager for ""
	I1117 12:13:24.323912    8716 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:13:24.323928    8716 start_flags.go:282] config:
	{Name:test-preload-20211117121323-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20211117121323-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:13:24.372255    8716 out.go:176] * Starting control plane node test-preload-20211117121323-2067 in cluster test-preload-20211117121323-2067
	I1117 12:13:24.372364    8716 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:13:24.398123    8716 out.go:176] * Pulling base image ...
	I1117 12:13:24.398240    8716 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I1117 12:13:24.398271    8716 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:13:24.398469    8716 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/test-preload-20211117121323-2067/config.json ...
	I1117 12:13:24.398557    8716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/test-preload-20211117121323-2067/config.json: {Name:mk4d749271b6e6ce4bb4634d912b2f86502294c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:13:24.398573    8716 cache.go:107] acquiring lock: {Name:mk484f4aa10be29d59ecef162cc3ba4ef356bc71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:24.398574    8716 cache.go:107] acquiring lock: {Name:mk834eda680f82d430a085d35132590597788855 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:24.400713    8716 cache.go:107] acquiring lock: {Name:mk3dde20f0492b6c81623be65c32e22d2f7ef775 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:24.400872    8716 cache.go:107] acquiring lock: {Name:mka5b9b877ce8ec5abdbcf38309ed216afebcc1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:24.400803    8716 cache.go:107] acquiring lock: {Name:mk757cf2ea27b429afc8f936d2baa977656448fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:24.402266    8716 cache.go:107] acquiring lock: {Name:mkc38557d3f08ef749cdb79439f2e56bd72f6169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:24.402312    8716 cache.go:107] acquiring lock: {Name:mk220bfc44e45f2cff65b1b1d596095a26a21c35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:24.401193    8716 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I1117 12:13:24.402381    8716 cache.go:107] acquiring lock: {Name:mk8510e8d29ffb1d7afc63ac2448ba0a514946b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:24.402387    8716 cache.go:107] acquiring lock: {Name:mk8b303a5d15a81fc9edc8267d40dfa9f5a412b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:24.402392    8716 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 3.834513ms
	I1117 12:13:24.402276    8716 cache.go:107] acquiring lock: {Name:mk8d570a8fdac05efe0fb6079160413b41a63a13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:24.402432    8716 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I1117 12:13:24.402526    8716 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I1117 12:13:24.402571    8716 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I1117 12:13:24.402571    8716 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 1.443337ms
	I1117 12:13:24.402566    8716 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I1117 12:13:24.402570    8716 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I1117 12:13:24.402604    8716 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I1117 12:13:24.402576    8716 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1117 12:13:24.402637    8716 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.818727ms
	I1117 12:13:24.402644    8716 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I1117 12:13:24.402650    8716 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1117 12:13:24.402668    8716 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I1117 12:13:24.402698    8716 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I1117 12:13:24.402802    8716 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I1117 12:13:24.404128    8716 image.go:176] found k8s.gcr.io/kube-proxy:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-proxy} tag:v1.17.0 original:k8s.gcr.io/kube-proxy:v1.17.0} opener:0xc0000de0e0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:13:24.404163    8716 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.0
	I1117 12:13:24.404689    8716 image.go:176] found k8s.gcr.io/etcd:3.4.3-0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:etcd} tag:3.4.3-0 original:k8s.gcr.io/etcd:3.4.3-0} opener:0xc0001aa230 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:13:24.404715    8716 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0
	I1117 12:13:24.404732    8716 image.go:176] found k8s.gcr.io/kube-scheduler:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-scheduler} tag:v1.17.0 original:k8s.gcr.io/kube-scheduler:v1.17.0} opener:0xc0000de1c0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:13:24.404759    8716 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.0
	I1117 12:13:24.405398    8716 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-controller-manager} tag:v1.17.0 original:k8s.gcr.io/kube-controller-manager:v1.17.0} opener:0xc000398000 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:13:24.405419    8716 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.0
	I1117 12:13:24.405575    8716 image.go:176] found k8s.gcr.io/kube-apiserver:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-apiserver} tag:v1.17.0 original:k8s.gcr.io/kube-apiserver:v1.17.0} opener:0xc0000de310 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:13:24.405591    8716 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0
	I1117 12:13:24.406093    8716 image.go:176] found k8s.gcr.io/coredns:1.6.5 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:coredns} tag:1.6.5 original:k8s.gcr.io/coredns:1.6.5} opener:0xc0003981c0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:13:24.406107    8716 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5
	I1117 12:13:24.406332    8716 image.go:176] found k8s.gcr.io/pause:3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:pause} tag:3.1 original:k8s.gcr.io/pause:3.1} opener:0xc0000de3f0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:13:24.406345    8716 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I1117 12:13:24.407462    8716 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 8.858663ms
	I1117 12:13:24.407527    8716 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.0" took 8.894757ms
	I1117 12:13:24.407689    8716 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.0" took 6.780412ms
	I1117 12:13:24.408581    8716 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" took 7.902041ms
	I1117 12:13:24.408764    8716 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.0" took 10.237781ms
	I1117 12:13:24.409388    8716 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 8.67511ms
	I1117 12:13:24.409788    8716 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0" took 11.201612ms
	I1117 12:13:24.515296    8716 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:13:24.515314    8716 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:13:24.515329    8716 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:13:24.515376    8716 start.go:313] acquiring machines lock for test-preload-20211117121323-2067: {Name:mk4a34cdaf7522bdc00b517ea41f1127cf5dea65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:24.515512    8716 start.go:317] acquired machines lock for "test-preload-20211117121323-2067" in 122.958µs
	I1117 12:13:24.515537    8716 start.go:89] Provisioning new machine with config: &{Name:test-preload-20211117121323-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20211117121323-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}
	I1117 12:13:24.515597    8716 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:13:24.542820    8716 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:13:24.543152    8716 start.go:160] libmachine.API.Create for "test-preload-20211117121323-2067" (driver="docker")
	I1117 12:13:24.543199    8716 client.go:168] LocalClient.Create starting
	I1117 12:13:24.543366    8716 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:13:24.543440    8716 main.go:130] libmachine: Decoding PEM data...
	I1117 12:13:24.543474    8716 main.go:130] libmachine: Parsing certificate...
	I1117 12:13:24.543588    8716 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:13:24.543643    8716 main.go:130] libmachine: Decoding PEM data...
	I1117 12:13:24.543666    8716 main.go:130] libmachine: Parsing certificate...
	I1117 12:13:24.544636    8716 cli_runner.go:115] Run: docker network inspect test-preload-20211117121323-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:13:24.648001    8716 cli_runner.go:162] docker network inspect test-preload-20211117121323-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:13:24.648100    8716 network_create.go:254] running [docker network inspect test-preload-20211117121323-2067] to gather additional debugging logs...
	I1117 12:13:24.648114    8716 cli_runner.go:115] Run: docker network inspect test-preload-20211117121323-2067
	W1117 12:13:24.750359    8716 cli_runner.go:162] docker network inspect test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:13:24.750388    8716 network_create.go:257] error running [docker network inspect test-preload-20211117121323-2067]: docker network inspect test-preload-20211117121323-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20211117121323-2067
	I1117 12:13:24.750407    8716 network_create.go:259] output of [docker network inspect test-preload-20211117121323-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20211117121323-2067
	
	** /stderr **
	I1117 12:13:24.750499    8716 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:13:24.853830    8716 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00013e5b8] misses:0}
	I1117 12:13:24.853885    8716 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:13:24.853905    8716 network_create.go:106] attempt to create docker network test-preload-20211117121323-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:13:24.853978    8716 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20211117121323-2067
	I1117 12:13:28.652993    8716 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20211117121323-2067: (3.799014551s)
	I1117 12:13:28.653018    8716 network_create.go:90] docker network test-preload-20211117121323-2067 192.168.49.0/24 created
	I1117 12:13:28.653036    8716 kic.go:106] calculated static IP "192.168.49.2" for the "test-preload-20211117121323-2067" container
	I1117 12:13:28.653141    8716 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:13:28.773012    8716 cli_runner.go:115] Run: docker volume create test-preload-20211117121323-2067 --label name.minikube.sigs.k8s.io=test-preload-20211117121323-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:13:28.876185    8716 oci.go:102] Successfully created a docker volume test-preload-20211117121323-2067
	I1117 12:13:28.876304    8716 cli_runner.go:115] Run: docker run --rm --name test-preload-20211117121323-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20211117121323-2067 --entrypoint /usr/bin/test -v test-preload-20211117121323-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:13:29.381212    8716 oci.go:106] Successfully prepared a docker volume test-preload-20211117121323-2067
	E1117 12:13:29.381265    8716 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:13:29.381272    8716 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I1117 12:13:29.381291    8716 client.go:171] LocalClient.Create took 4.838129792s
	I1117 12:13:31.388919    8716 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:13:31.389079    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:13:31.490972    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:13:31.491059    8716 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:31.777603    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:13:31.880526    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:13:31.880603    8716 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:32.427583    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:13:32.533382    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:13:32.533463    8716 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:33.194177    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:13:33.295966    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	W1117 12:13:33.296049    8716 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	
	W1117 12:13:33.296070    8716 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:33.296081    8716 start.go:129] duration metric: createHost completed in 8.780560183s
	I1117 12:13:33.296087    8716 start.go:80] releasing machines lock for "test-preload-20211117121323-2067", held for 8.780649986s
	W1117 12:13:33.296100    8716 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:13:33.296533    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:33.397163    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:13:33.397207    8716 delete.go:82] Unable to get host status for test-preload-20211117121323-2067, assuming it has already been deleted: state: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	W1117 12:13:33.397338    8716 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:13:33.397350    8716 start.go:547] Will try again in 5 seconds ...
	I1117 12:13:38.400952    8716 start.go:313] acquiring machines lock for test-preload-20211117121323-2067: {Name:mk4a34cdaf7522bdc00b517ea41f1127cf5dea65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:13:38.401111    8716 start.go:317] acquired machines lock for "test-preload-20211117121323-2067" in 127.357µs
	I1117 12:13:38.401154    8716 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:13:38.401166    8716 fix.go:55] fixHost starting: 
	I1117 12:13:38.401625    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:38.502214    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:13:38.502262    8716 fix.go:108] recreateIfNeeded on test-preload-20211117121323-2067: state= err=unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:38.502287    8716 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:13:38.529494    8716 out.go:176] * docker "test-preload-20211117121323-2067" container is missing, will recreate.
	I1117 12:13:38.529604    8716 delete.go:124] DEMOLISHING test-preload-20211117121323-2067 ...
	I1117 12:13:38.529833    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:38.629570    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:13:38.629616    8716 stop.go:75] unable to get state: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:38.629637    8716 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:38.630038    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:38.732970    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:13:38.733010    8716 delete.go:82] Unable to get host status for test-preload-20211117121323-2067, assuming it has already been deleted: state: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:38.733094    8716 cli_runner.go:115] Run: docker container inspect -f {{.Id}} test-preload-20211117121323-2067
	W1117 12:13:38.832940    8716 cli_runner.go:162] docker container inspect -f {{.Id}} test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:13:38.832965    8716 kic.go:360] could not find the container test-preload-20211117121323-2067 to remove it. will try anyways
	I1117 12:13:38.833044    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:38.933051    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:13:38.933096    8716 oci.go:83] error getting container status, will try to delete anyways: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:38.933211    8716 cli_runner.go:115] Run: docker exec --privileged -t test-preload-20211117121323-2067 /bin/bash -c "sudo init 0"
	W1117 12:13:39.036553    8716 cli_runner.go:162] docker exec --privileged -t test-preload-20211117121323-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:13:39.036579    8716 oci.go:656] error shutdown test-preload-20211117121323-2067: docker exec --privileged -t test-preload-20211117121323-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:40.043053    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:40.146471    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:13:40.146509    8716 oci.go:668] temporary error verifying shutdown: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:40.146555    8716 oci.go:670] temporary error: container test-preload-20211117121323-2067 status is  but expect it to be exited
	I1117 12:13:40.146576    8716 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:40.609192    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:40.709433    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:13:40.709475    8716 oci.go:668] temporary error verifying shutdown: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:40.709498    8716 oci.go:670] temporary error: container test-preload-20211117121323-2067 status is  but expect it to be exited
	I1117 12:13:40.709522    8716 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:41.603964    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:41.704294    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:13:41.704332    8716 oci.go:668] temporary error verifying shutdown: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:41.704342    8716 oci.go:670] temporary error: container test-preload-20211117121323-2067 status is  but expect it to be exited
	I1117 12:13:41.704364    8716 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:42.344736    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:42.448780    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:13:42.448822    8716 oci.go:668] temporary error verifying shutdown: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:42.448833    8716 oci.go:670] temporary error: container test-preload-20211117121323-2067 status is  but expect it to be exited
	I1117 12:13:42.448856    8716 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:43.560815    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:43.663151    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:13:43.663189    8716 oci.go:668] temporary error verifying shutdown: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:43.663199    8716 oci.go:670] temporary error: container test-preload-20211117121323-2067 status is  but expect it to be exited
	I1117 12:13:43.663219    8716 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:45.178142    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:45.281053    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:13:45.281092    8716 oci.go:668] temporary error verifying shutdown: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:45.281103    8716 oci.go:670] temporary error: container test-preload-20211117121323-2067 status is  but expect it to be exited
	I1117 12:13:45.281125    8716 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:48.331176    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:48.435420    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:13:48.435460    8716 oci.go:668] temporary error verifying shutdown: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:48.435470    8716 oci.go:670] temporary error: container test-preload-20211117121323-2067 status is  but expect it to be exited
	I1117 12:13:48.435491    8716 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:54.220270    8716 cli_runner.go:115] Run: docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}
	W1117 12:13:54.325768    8716 cli_runner.go:162] docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:13:54.325807    8716 oci.go:668] temporary error verifying shutdown: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:13:54.325816    8716 oci.go:670] temporary error: container test-preload-20211117121323-2067 status is  but expect it to be exited
	I1117 12:13:54.325838    8716 oci.go:87] couldn't shut down test-preload-20211117121323-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	 
	I1117 12:13:54.325921    8716 cli_runner.go:115] Run: docker rm -f -v test-preload-20211117121323-2067
	I1117 12:13:54.426089    8716 cli_runner.go:115] Run: docker container inspect -f {{.Id}} test-preload-20211117121323-2067
	W1117 12:13:54.525429    8716 cli_runner.go:162] docker container inspect -f {{.Id}} test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:13:54.525555    8716 cli_runner.go:115] Run: docker network inspect test-preload-20211117121323-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:13:54.625616    8716 cli_runner.go:115] Run: docker network rm test-preload-20211117121323-2067
	I1117 12:13:57.394211    8716 cli_runner.go:168] Completed: docker network rm test-preload-20211117121323-2067: (2.768563523s)
	W1117 12:13:57.394513    8716 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:13:57.394519    8716 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:13:58.394724    8716 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:13:58.443880    8716 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:13:58.444036    8716 start.go:160] libmachine.API.Create for "test-preload-20211117121323-2067" (driver="docker")
	I1117 12:13:58.444067    8716 client.go:168] LocalClient.Create starting
	I1117 12:13:58.444274    8716 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:13:58.444374    8716 main.go:130] libmachine: Decoding PEM data...
	I1117 12:13:58.444401    8716 main.go:130] libmachine: Parsing certificate...
	I1117 12:13:58.444511    8716 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:13:58.444566    8716 main.go:130] libmachine: Decoding PEM data...
	I1117 12:13:58.444586    8716 main.go:130] libmachine: Parsing certificate...
	I1117 12:13:58.445410    8716 cli_runner.go:115] Run: docker network inspect test-preload-20211117121323-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:13:58.549670    8716 cli_runner.go:162] docker network inspect test-preload-20211117121323-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:13:58.549783    8716 network_create.go:254] running [docker network inspect test-preload-20211117121323-2067] to gather additional debugging logs...
	I1117 12:13:58.549800    8716 cli_runner.go:115] Run: docker network inspect test-preload-20211117121323-2067
	W1117 12:13:58.649543    8716 cli_runner.go:162] docker network inspect test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:13:58.649569    8716 network_create.go:257] error running [docker network inspect test-preload-20211117121323-2067]: docker network inspect test-preload-20211117121323-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20211117121323-2067
	I1117 12:13:58.649581    8716 network_create.go:259] output of [docker network inspect test-preload-20211117121323-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20211117121323-2067
	
	** /stderr **
	I1117 12:13:58.649677    8716 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:13:58.769693    8716 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00013e5b8] amended:false}} dirty:map[] misses:0}
	I1117 12:13:58.769726    8716 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:13:58.769899    8716 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00013e5b8] amended:true}} dirty:map[192.168.49.0:0xc00013e5b8 192.168.58.0:0xc00065c2a8] misses:0}
	I1117 12:13:58.769911    8716 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:13:58.769920    8716 network_create.go:106] attempt to create docker network test-preload-20211117121323-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:13:58.770012    8716 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20211117121323-2067
	I1117 12:14:02.705072    8716 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20211117121323-2067: (3.93505231s)
	I1117 12:14:02.705097    8716 network_create.go:90] docker network test-preload-20211117121323-2067 192.168.58.0/24 created
	I1117 12:14:02.705112    8716 kic.go:106] calculated static IP "192.168.58.2" for the "test-preload-20211117121323-2067" container
	I1117 12:14:02.705374    8716 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:14:02.806652    8716 cli_runner.go:115] Run: docker volume create test-preload-20211117121323-2067 --label name.minikube.sigs.k8s.io=test-preload-20211117121323-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:14:02.906310    8716 oci.go:102] Successfully created a docker volume test-preload-20211117121323-2067
	I1117 12:14:02.906449    8716 cli_runner.go:115] Run: docker run --rm --name test-preload-20211117121323-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20211117121323-2067 --entrypoint /usr/bin/test -v test-preload-20211117121323-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:14:03.316838    8716 oci.go:106] Successfully prepared a docker volume test-preload-20211117121323-2067
	E1117 12:14:03.316887    8716 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:14:03.316897    8716 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I1117 12:14:03.316898    8716 client.go:171] LocalClient.Create took 4.872866566s
	I1117 12:14:05.327249    8716 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:14:05.327442    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:14:05.430951    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:14:05.431046    8716 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:14:05.610263    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:14:05.712897    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:14:05.713030    8716 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:14:06.043862    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:14:06.149342    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:14:06.149441    8716 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:14:06.615620    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:14:06.717087    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	W1117 12:14:06.717199    8716 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	
	W1117 12:14:06.717218    8716 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:14:06.717230    8716 start.go:129] duration metric: createHost completed in 8.322498842s
	I1117 12:14:06.717295    8716 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:14:06.717354    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:14:06.817006    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:14:06.817108    8716 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:14:07.013097    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:14:07.113986    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:14:07.114081    8716 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:14:07.421841    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:14:07.527825    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	I1117 12:14:07.527913    8716 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:14:08.196590    8716 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067
	W1117 12:14:08.299304    8716 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067 returned with exit code 1
	W1117 12:14:08.299418    8716 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	
	W1117 12:14:08.299441    8716 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117121323-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117121323-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067
	I1117 12:14:08.299451    8716 fix.go:57] fixHost completed within 29.898562879s
	I1117 12:14:08.299462    8716 start.go:80] releasing machines lock for "test-preload-20211117121323-2067", held for 29.898615485s
	W1117 12:14:08.299598    8716 out.go:241] * Failed to start docker container. Running "minikube delete -p test-preload-20211117121323-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p test-preload-20211117121323-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:14:08.348116    8716 out.go:176] 
	W1117 12:14:08.348307    8716 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:14:08.348325    8716 out.go:241] * 
	* 
	W1117 12:14:08.349373    8716 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:14:08.432108    8716 out.go:176] 

** /stderr **
preload_test.go:51: out/minikube-darwin-amd64 start -p test-preload-20211117121323-2067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 80
panic.go:642: *** TestPreload FAILED at 2021-11-17 12:14:08.462587 -0800 PST m=+1444.211592199
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20211117121323-2067
helpers_test.go:235: (dbg) docker inspect test-preload-20211117121323-2067:

-- stdout --
	[
	    {
	        "Name": "test-preload-20211117121323-2067",
	        "Id": "f09955e7ff3be6bf4fab93506446b017b92b83561f7f851ae7c5d330bf01a5ae",
	        "Created": "2021-11-17T20:13:58.873257866Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20211117121323-2067 -n test-preload-20211117121323-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20211117121323-2067 -n test-preload-20211117121323-2067: exit status 7 (141.808367ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:14:08.710109    8927 status.go:247] status error: host: state: unknown state "test-preload-20211117121323-2067": docker container inspect test-preload-20211117121323-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117121323-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-20211117121323-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "test-preload-20211117121323-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20211117121323-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20211117121323-2067: (3.855527927s)
--- FAIL: TestPreload (48.91s)

TestScheduledStopUnix (50.01s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20211117121412-2067 --memory=2048 --driver=docker 
scheduled_stop_test.go:129: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-20211117121412-2067 --memory=2048 --driver=docker : exit status 80 (45.327504567s)

-- stdout --
	* [scheduled-stop-20211117121412-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node scheduled-stop-20211117121412-2067 in cluster scheduled-stop-20211117121412-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20211117121412-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 12:14:18.357473    8969 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:14:52.545470    8969 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20211117121412-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:131: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-20211117121412-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node scheduled-stop-20211117121412-2067 in cluster scheduled-stop-20211117121412-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20211117121412-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 12:14:18.357473    8969 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:14:52.545470    8969 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20211117121412-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:642: *** TestScheduledStopUnix FAILED at 2021-11-17 12:14:57.894526 -0800 PST m=+1493.643988678
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-20211117121412-2067
helpers_test.go:235: (dbg) docker inspect scheduled-stop-20211117121412-2067:

-- stdout --
	[
	    {
	        "Name": "scheduled-stop-20211117121412-2067",
	        "Id": "e3d60c7b83235342a7ad984450dc9d9ce850802a873653ef30e7f70cc5c5df22",
	        "Created": "2021-11-17T20:14:47.971609862Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20211117121412-2067 -n scheduled-stop-20211117121412-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20211117121412-2067 -n scheduled-stop-20211117121412-2067: exit status 7 (158.557646ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:14:58.208703    9198 status.go:247] status error: host: state: unknown state "scheduled-stop-20211117121412-2067": docker container inspect scheduled-stop-20211117121412-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: scheduled-stop-20211117121412-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-20211117121412-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-20211117121412-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20211117121412-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20211117121412-2067: (4.371538485s)
--- FAIL: TestScheduledStopUnix (50.01s)

TestSkaffold (51.58s)

=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe430205667 version
skaffold_test.go:61: skaffold version: v1.35.0
skaffold_test.go:64: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20211117121502-2067 --memory=2600 --driver=docker 
skaffold_test.go:64: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-20211117121502-2067 --memory=2600 --driver=docker : exit status 80 (45.394317259s)

-- stdout --
	* [skaffold-20211117121502-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node skaffold-20211117121502-2067 in cluster skaffold-20211117121502-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	* docker "skaffold-20211117121502-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 12:15:10.053595    9242 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:15:44.192005    9242 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p skaffold-20211117121502-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:66: starting minikube: exit status 80

-- stdout --
	* [skaffold-20211117121502-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node skaffold-20211117121502-2067 in cluster skaffold-20211117121502-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	* docker "skaffold-20211117121502-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 12:15:10.053595    9242 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:15:44.192005    9242 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p skaffold-20211117121502-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:642: *** TestSkaffold FAILED at 2021-11-17 12:15:49.689825 -0800 PST m=+1545.439766614
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-20211117121502-2067
helpers_test.go:235: (dbg) docker inspect skaffold-20211117121502-2067:

-- stdout --
	[
	    {
	        "Name": "skaffold-20211117121502-2067",
	        "Id": "575f6e24b317c9480fb6bd6f7626b942f46d3108b4df8addf6029c36037e2ccc",
	        "Created": "2021-11-17T20:15:39.670822677Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-20211117121502-2067 -n skaffold-20211117121502-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-20211117121502-2067 -n skaffold-20211117121502-2067: exit status 7 (151.994216ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:15:49.950508    9472 status.go:247] status error: host: state: unknown state "skaffold-20211117121502-2067": docker container inspect skaffold-20211117121502-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: skaffold-20211117121502-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-20211117121502-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-20211117121502-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20211117121502-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20211117121502-2067: (4.201487718s)
--- FAIL: TestSkaffold (51.58s)

TestInsufficientStorage (13.05s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20211117121554-2067 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20211117121554-2067 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (7.733595069s)

-- stdout --
	{"specversion":"1.0","id":"faf70bed-0430-4a8c-8fbb-b940609b89af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20211117121554-2067] minikube v1.24.0 on Darwin 11.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd14c41d-bd45-463d-be75-1329fe44cd58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"75db9dfd-80e9-44c7-9890-20e53aa21416","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig"}}
	{"specversion":"1.0","id":"d0f392bd-020b-455d-8546-f486392610e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"d4077dbf-0d04-402f-8c77-9b969c437e89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube"}}
	{"specversion":"1.0","id":"2deb3854-99cd-4ee5-943c-17d05daed0df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"81592559-dba5-4e00-8439-1817b163ffaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e06562d-d82b-4f2b-8993-abcf2ba886e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20211117121554-2067 in cluster insufficient-storage-20211117121554-2067","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a69d690a-b5f9-4dd6-93f3-0a596da8afaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"12c30b8d-cd59-464f-bdfb-d99fde3d8dd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"457a43e7-b45c-4e88-93dc-5d1606d0d384","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
** stderr ** 
	E1117 12:15:59.861879    9524 oci.go:173] error getting kernel modules path: Unable to locate kernel modules

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20211117121554-2067 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20211117121554-2067 --output=json --layout=cluster: exit status 7 (175.96594ms)

-- stdout --
	{"Name":"insufficient-storage-20211117121554-2067","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":520,"StatusName":"Unknown"}},"Nodes":[{"Name":"insufficient-storage-20211117121554-2067","StatusCode":520,"StatusName":"Unknown","Components":{"apiserver":{"Name":"apiserver","StatusCode":520,"StatusName":"Unknown"},"kubelet":{"Name":"kubelet","StatusCode":520,"StatusName":"Unknown"}}}]}

-- /stdout --
** stderr ** 
	E1117 12:16:02.086126    9586 status.go:258] status error: host: state: unknown state "insufficient-storage-20211117121554-2067": docker container inspect insufficient-storage-20211117121554-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: insufficient-storage-20211117121554-2067
	E1117 12:16:02.086141    9586 status.go:261] The "insufficient-storage-20211117121554-2067" host does not exist!

** /stderr **
status_test.go:99: incorrect node status code: 507
helpers_test.go:175: Cleaning up "insufficient-storage-20211117121554-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20211117121554-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20211117121554-2067: (5.129857715s)
--- FAIL: TestInsufficientStorage (13.05s)

TestKubernetesUpgrade (66.73s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211117121650-2067 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211117121650-2067 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker : exit status 80 (46.915710081s)

-- stdout --
	* [kubernetes-upgrade-20211117121650-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node kubernetes-upgrade-20211117121650-2067 in cluster kubernetes-upgrade-20211117121650-2067
	* Pulling base image ...
	* Downloading Kubernetes v1.14.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-20211117121650-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:16:50.968193   10064 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:16:50.968330   10064 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:16:50.968335   10064 out.go:310] Setting ErrFile to fd 2...
	I1117 12:16:50.968339   10064 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:16:50.968415   10064 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:16:50.968722   10064 out.go:304] Setting JSON to false
	I1117 12:16:50.994404   10064 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":2785,"bootTime":1637177425,"procs":323,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:16:50.994518   10064 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:16:51.020621   10064 out.go:176] * [kubernetes-upgrade-20211117121650-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:16:51.020715   10064 notify.go:174] Checking for updates...
	I1117 12:16:51.067288   10064 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:16:51.093500   10064 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:16:51.119479   10064 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:16:51.145276   10064 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:16:51.145908   10064 config.go:176] Loaded profile config "missing-upgrade-20211117121608-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 12:16:51.145990   10064 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:16:51.146024   10064 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:16:51.234478   10064 docker.go:132] docker version: linux-20.10.5
	I1117 12:16:51.234662   10064 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:16:51.385853   10064 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:56 SystemTime:2021-11-17 20:16:51.333751635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:16:51.412742   10064 out.go:176] * Using the docker driver based on user configuration
	I1117 12:16:51.412767   10064 start.go:280] selected driver: docker
	I1117 12:16:51.412774   10064 start.go:775] validating driver "docker" against <nil>
	I1117 12:16:51.412784   10064 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:16:51.415171   10064 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:16:51.565382   10064 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:56 SystemTime:2021-11-17 20:16:51.514771047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:16:51.565522   10064 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:16:51.565642   10064 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 12:16:51.565657   10064 cni.go:93] Creating CNI manager for ""
	I1117 12:16:51.565663   10064 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:16:51.565674   10064 start_flags.go:282] config:
	{Name:kubernetes-upgrade-20211117121650-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20211117121650-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:16:51.614026   10064 out.go:176] * Starting control plane node kubernetes-upgrade-20211117121650-2067 in cluster kubernetes-upgrade-20211117121650-2067
	I1117 12:16:51.614100   10064 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:16:51.640143   10064 out.go:176] * Pulling base image ...
	I1117 12:16:51.640175   10064 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 12:16:51.640232   10064 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:16:51.715559   10064 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4
	I1117 12:16:51.715588   10064 cache.go:57] Caching tarball of preloaded images
	I1117 12:16:51.715772   10064 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 12:16:51.742399   10064 out.go:176] * Downloading Kubernetes v1.14.0 preload ...
	I1117 12:16:51.742420   10064 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 12:16:51.785323   10064 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:16:51.785337   10064 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:16:51.838373   10064 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4?checksum=md5:ec855295d74f2fe00733f44cbe6bc00d -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4
	I1117 12:16:54.486679   10064 preload.go:248] saving checksum for preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 12:16:54.486837   10064 preload.go:255] verifying checksumm of /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 12:16:55.220021   10064 cache.go:60] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I1117 12:16:55.220107   10064 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/kubernetes-upgrade-20211117121650-2067/config.json ...
	I1117 12:16:55.220137   10064 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/kubernetes-upgrade-20211117121650-2067/config.json: {Name:mkba537b234f6eef522cb14c1b8eb6e78c1561b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:16:55.220397   10064 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:16:55.220430   10064 start.go:313] acquiring machines lock for kubernetes-upgrade-20211117121650-2067: {Name:mkf665c4f19f40278f471c3b148bcac53b848672 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:16:55.220515   10064 start.go:317] acquired machines lock for "kubernetes-upgrade-20211117121650-2067" in 75.457µs
	I1117 12:16:55.220538   10064 start.go:89] Provisioning new machine with config: &{Name:kubernetes-upgrade-20211117121650-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20211117121650-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I1117 12:16:55.220575   10064 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:16:55.246983   10064 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:16:55.247247   10064 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20211117121650-2067" (driver="docker")
	I1117 12:16:55.247285   10064 client.go:168] LocalClient.Create starting
	I1117 12:16:55.247443   10064 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:16:55.247512   10064 main.go:130] libmachine: Decoding PEM data...
	I1117 12:16:55.247551   10064 main.go:130] libmachine: Parsing certificate...
	I1117 12:16:55.247648   10064 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:16:55.247704   10064 main.go:130] libmachine: Decoding PEM data...
	I1117 12:16:55.247721   10064 main.go:130] libmachine: Parsing certificate...
	I1117 12:16:55.248466   10064 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117121650-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:16:55.352303   10064 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117121650-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:16:55.352437   10064 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117121650-2067] to gather additional debugging logs...
	I1117 12:16:55.352466   10064 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117121650-2067
	W1117 12:16:55.452651   10064 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:16:55.452684   10064 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117121650-2067]: docker network inspect kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20211117121650-2067
	I1117 12:16:55.452709   10064 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117121650-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20211117121650-2067
	
	** /stderr **
	I1117 12:16:55.452811   10064 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:16:55.554584   10064 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00053c578] misses:0}
	I1117 12:16:55.554619   10064 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:16:55.554633   10064 network_create.go:106] attempt to create docker network kubernetes-upgrade-20211117121650-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:16:55.554713   10064 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117121650-2067
	I1117 12:16:56.650574   10064 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117121650-2067: (1.095804743s)
	I1117 12:16:56.650599   10064 network_create.go:90] docker network kubernetes-upgrade-20211117121650-2067 192.168.49.0/24 created
	I1117 12:16:56.650618   10064 kic.go:106] calculated static IP "192.168.49.2" for the "kubernetes-upgrade-20211117121650-2067" container
	I1117 12:16:56.650747   10064 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:16:56.752652   10064 cli_runner.go:115] Run: docker volume create kubernetes-upgrade-20211117121650-2067 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117121650-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:16:56.856790   10064 oci.go:102] Successfully created a docker volume kubernetes-upgrade-20211117121650-2067
	I1117 12:16:56.856957   10064 cli_runner.go:115] Run: docker run --rm --name kubernetes-upgrade-20211117121650-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117121650-2067 --entrypoint /usr/bin/test -v kubernetes-upgrade-20211117121650-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:16:57.336205   10064 oci.go:106] Successfully prepared a docker volume kubernetes-upgrade-20211117121650-2067
	E1117 12:16:57.336264   10064 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:16:57.336277   10064 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 12:16:57.336292   10064 client.go:171] LocalClient.Create took 2.088969895s
	I1117 12:16:57.336312   10064 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:16:57.336456   10064 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20211117121650-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:16:59.341653   10064 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:16:59.341753   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:16:59.473714   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:16:59.473851   10064 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:16:59.758116   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:16:59.887315   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:16:59.887412   10064 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:00.434000   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:17:00.564997   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:17:00.565101   10064 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:01.220567   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:17:01.347437   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	W1117 12:17:01.347533   10064 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	
	W1117 12:17:01.347557   10064 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:01.347567   10064 start.go:129] duration metric: createHost completed in 6.126917664s
	I1117 12:17:01.347573   10064 start.go:80] releasing machines lock for "kubernetes-upgrade-20211117121650-2067", held for 6.126981243s
	W1117 12:17:01.347588   10064 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:17:01.348132   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:01.475816   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:01.475896   10064 delete.go:82] Unable to get host status for kubernetes-upgrade-20211117121650-2067, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	W1117 12:17:01.476051   10064 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:17:01.476066   10064 start.go:547] Will try again in 5 seconds ...
	I1117 12:17:03.682521   10064 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20211117121650-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.345965733s)
	I1117 12:17:03.682539   10064 kic.go:188] duration metric: took 6.346172 seconds to extract preloaded images to volume
	I1117 12:17:06.483119   10064 start.go:313] acquiring machines lock for kubernetes-upgrade-20211117121650-2067: {Name:mkf665c4f19f40278f471c3b148bcac53b848672 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:06.483222   10064 start.go:317] acquired machines lock for "kubernetes-upgrade-20211117121650-2067" in 82.761µs
	I1117 12:17:06.483246   10064 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:17:06.483253   10064 fix.go:55] fixHost starting: 
	I1117 12:17:06.483501   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:06.594494   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:06.594726   10064 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20211117121650-2067: state= err=unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:06.594742   10064 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:17:06.621706   10064 out.go:176] * docker "kubernetes-upgrade-20211117121650-2067" container is missing, will recreate.
	I1117 12:17:06.621720   10064 delete.go:124] DEMOLISHING kubernetes-upgrade-20211117121650-2067 ...
	I1117 12:17:06.621952   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:06.732385   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:17:06.732432   10064 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:06.732445   10064 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:06.732914   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:06.841226   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:06.841283   10064 delete.go:82] Unable to get host status for kubernetes-upgrade-20211117121650-2067, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:06.841374   10064 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20211117121650-2067
	W1117 12:17:06.954464   10064 cli_runner.go:162] docker container inspect -f {{.Id}} kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:17:06.954493   10064 kic.go:360] could not find the container kubernetes-upgrade-20211117121650-2067 to remove it. will try anyways
	I1117 12:17:06.954589   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:07.064759   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:17:07.064812   10064 oci.go:83] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:07.064928   10064 cli_runner.go:115] Run: docker exec --privileged -t kubernetes-upgrade-20211117121650-2067 /bin/bash -c "sudo init 0"
	W1117 12:17:07.180837   10064 cli_runner.go:162] docker exec --privileged -t kubernetes-upgrade-20211117121650-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:17:07.180863   10064 oci.go:656] error shutdown kubernetes-upgrade-20211117121650-2067: docker exec --privileged -t kubernetes-upgrade-20211117121650-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:08.183171   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:08.291946   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:08.291991   10064 oci.go:668] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:08.292000   10064 oci.go:670] temporary error: container kubernetes-upgrade-20211117121650-2067 status is  but expect it to be exited
	I1117 12:17:08.292024   10064 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:08.758790   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:08.863042   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:08.863084   10064 oci.go:668] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:08.863090   10064 oci.go:670] temporary error: container kubernetes-upgrade-20211117121650-2067 status is  but expect it to be exited
	I1117 12:17:08.863119   10064 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:09.758355   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:09.862029   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:09.862074   10064 oci.go:668] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:09.862087   10064 oci.go:670] temporary error: container kubernetes-upgrade-20211117121650-2067 status is  but expect it to be exited
	I1117 12:17:09.862109   10064 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:10.508359   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:10.608866   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:10.608908   10064 oci.go:668] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:10.608915   10064 oci.go:670] temporary error: container kubernetes-upgrade-20211117121650-2067 status is  but expect it to be exited
	I1117 12:17:10.608939   10064 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:11.719329   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:11.820204   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:11.820253   10064 oci.go:668] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:11.820260   10064 oci.go:670] temporary error: container kubernetes-upgrade-20211117121650-2067 status is  but expect it to be exited
	I1117 12:17:11.820283   10064 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:13.333853   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:13.439347   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:13.439389   10064 oci.go:668] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:13.439397   10064 oci.go:670] temporary error: container kubernetes-upgrade-20211117121650-2067 status is  but expect it to be exited
	I1117 12:17:13.439422   10064 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:16.483197   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:16.582115   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:16.582156   10064 oci.go:668] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:16.582171   10064 oci.go:670] temporary error: container kubernetes-upgrade-20211117121650-2067 status is  but expect it to be exited
	I1117 12:17:16.582193   10064 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:22.366893   10064 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}
	W1117 12:17:22.466771   10064 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:22.466812   10064 oci.go:668] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:22.466818   10064 oci.go:670] temporary error: container kubernetes-upgrade-20211117121650-2067 status is  but expect it to be exited
	I1117 12:17:22.466843   10064 oci.go:87] couldn't shut down kubernetes-upgrade-20211117121650-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	 
	I1117 12:17:22.466936   10064 cli_runner.go:115] Run: docker rm -f -v kubernetes-upgrade-20211117121650-2067
	I1117 12:17:22.569321   10064 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20211117121650-2067
	W1117 12:17:22.669181   10064 cli_runner.go:162] docker container inspect -f {{.Id}} kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:17:22.669298   10064 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117121650-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:17:22.769365   10064 cli_runner.go:115] Run: docker network rm kubernetes-upgrade-20211117121650-2067
	I1117 12:17:24.633966   10064 cli_runner.go:168] Completed: docker network rm kubernetes-upgrade-20211117121650-2067: (1.864540639s)
	W1117 12:17:24.634312   10064 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:17:24.634319   10064 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:17:25.641538   10064 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:17:25.686219   10064 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:17:25.686385   10064 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20211117121650-2067" (driver="docker")
	I1117 12:17:25.686422   10064 client.go:168] LocalClient.Create starting
	I1117 12:17:25.686655   10064 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:17:25.686878   10064 main.go:130] libmachine: Decoding PEM data...
	I1117 12:17:25.686943   10064 main.go:130] libmachine: Parsing certificate...
	I1117 12:17:25.687104   10064 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:17:25.708076   10064 main.go:130] libmachine: Decoding PEM data...
	I1117 12:17:25.708106   10064 main.go:130] libmachine: Parsing certificate...
	I1117 12:17:25.708616   10064 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117121650-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:17:25.824715   10064 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117121650-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:17:25.824850   10064 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117121650-2067] to gather additional debugging logs...
	I1117 12:17:25.824871   10064 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117121650-2067
	W1117 12:17:25.940356   10064 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:17:25.965210   10064 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117121650-2067]: docker network inspect kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:25.965229   10064 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117121650-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20211117121650-2067
	
	** /stderr **
	I1117 12:17:25.965376   10064 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:17:26.088106   10064 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00053c578] amended:false}} dirty:map[] misses:0}
	I1117 12:17:26.088145   10064 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:17:26.088317   10064 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00053c578] amended:true}} dirty:map[192.168.49.0:0xc00053c578 192.168.58.0:0xc0007a6ab8] misses:0}
	I1117 12:17:26.088333   10064 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:17:26.088340   10064 network_create.go:106] attempt to create docker network kubernetes-upgrade-20211117121650-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:17:26.088420   10064 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117121650-2067
	I1117 12:17:31.821432   10064 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117121650-2067: (5.732977535s)
	I1117 12:17:31.821461   10064 network_create.go:90] docker network kubernetes-upgrade-20211117121650-2067 192.168.58.0/24 created
	I1117 12:17:31.821487   10064 kic.go:106] calculated static IP "192.168.58.2" for the "kubernetes-upgrade-20211117121650-2067" container
	I1117 12:17:31.821610   10064 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:17:31.923134   10064 cli_runner.go:115] Run: docker volume create kubernetes-upgrade-20211117121650-2067 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117121650-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:17:32.025128   10064 oci.go:102] Successfully created a docker volume kubernetes-upgrade-20211117121650-2067
	I1117 12:17:32.025288   10064 cli_runner.go:115] Run: docker run --rm --name kubernetes-upgrade-20211117121650-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117121650-2067 --entrypoint /usr/bin/test -v kubernetes-upgrade-20211117121650-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:17:32.426744   10064 oci.go:106] Successfully prepared a docker volume kubernetes-upgrade-20211117121650-2067
	E1117 12:17:32.426792   10064 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:17:32.426807   10064 client.go:171] LocalClient.Create took 6.740404207s
	I1117 12:17:32.426829   10064 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 12:17:32.426847   10064 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:17:32.426960   10064 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20211117121650-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:17:34.433159   10064 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:17:34.433264   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:17:34.565173   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:17:34.565247   10064 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:34.744534   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:17:34.874398   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:17:34.874507   10064 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:35.208100   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:17:35.335346   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:17:35.335470   10064 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:35.795800   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:17:35.922928   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	W1117 12:17:35.923037   10064 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	
	W1117 12:17:35.923060   10064 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:35.923073   10064 start.go:129] duration metric: createHost completed in 10.281559083s
	I1117 12:17:35.923147   10064 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:17:35.923228   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:17:36.058622   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:17:36.058773   10064 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:36.258102   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:17:36.380683   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:17:36.380764   10064 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:36.683174   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:17:36.804879   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	I1117 12:17:36.804968   10064 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:37.472993   10064 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067
	W1117 12:17:37.580626   10064 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067 returned with exit code 1
	W1117 12:17:37.580710   10064 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	
	W1117 12:17:37.580731   10064 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117121650-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117121650-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	I1117 12:17:37.580740   10064 fix.go:57] fixHost completed within 31.097541653s
	I1117 12:17:37.580747   10064 start.go:80] releasing machines lock for "kubernetes-upgrade-20211117121650-2067", held for 31.097572314s
	W1117 12:17:37.580893   10064 out.go:241] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20211117121650-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20211117121650-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:17:37.703588   10064 out.go:176] 
	W1117 12:17:37.703796   10064 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:17:37.703816   10064 out.go:241] * 
	* 
	W1117 12:17:37.704878   10064 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:17:37.824471   10064 out.go:176] 

** /stderr **
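The stderr above covers two complete create attempts: the first reserves subnet 192.168.49.0/24, and the retry five seconds later skips that still-reserved subnet and uses 192.168.58.0/24 instead. Both attempts fail at the same point, before the node container is ever started, when oci.go:173 reports "Unable to locate kernel modules" right after the preload sidecar container is run. The bridge network from the second attempt is created with the command below (copied from the log, shown here only as a sketch of what is left behind on the host):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117121650-2067

That network is what the post-mortem docker inspect further down still shows: a 192.168.58.0/24 bridge with an empty Containers map.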
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211117121650-2067 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker : exit status 80
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20211117121650-2067

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20211117121650-2067: exit status 82 (14.84897344s)

-- stdout --
	* Stopping node "kubernetes-upgrade-20211117121650-2067"  ...
	* Stopping node "kubernetes-upgrade-20211117121650-2067"  ...
	* Stopping node "kubernetes-upgrade-20211117121650-2067"  ...
	* Stopping node "kubernetes-upgrade-20211117121650-2067"  ...
	* Stopping node "kubernetes-upgrade-20211117121650-2067"  ...
	* Stopping node "kubernetes-upgrade-20211117121650-2067"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect kubernetes-upgrade-20211117121650-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
version_upgrade_test.go:236: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20211117121650-2067 failed: exit status 82
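The stop failure follows directly from the missing container: minikube stop polls the node state with docker container inspect, every poll fails with "No such container", and after repeating the "Stopping node" step six times it gives up with GUEST_STOP_TIMEOUT (exit status 82). The per-command log referenced in the box above can be read on the worker that ran this job (path copied from the message; it exists only on that machine):

	cat /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log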
panic.go:642: *** TestKubernetesUpgrade FAILED at 2021-11-17 12:17:52.701304 -0800 PST m=+1668.430840596
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20211117121650-2067
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20211117121650-2067:

-- stdout --
	[
	    {
	        "Name": "kubernetes-upgrade-20211117121650-2067",
	        "Id": "635afa6cf0d6c7b2c3712da120da0ebfca3db828de21dfd71a3fc237f5a1bf4f",
	        "Created": "2021-11-17T20:17:26.207409315Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20211117121650-2067 -n kubernetes-upgrade-20211117121650-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20211117121650-2067 -n kubernetes-upgrade-20211117121650-2067: exit status 7 (142.893442ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:17:52.949725   10610 status.go:247] status error: host: state: unknown state "kubernetes-upgrade-20211117121650-2067": docker container inspect kubernetes-upgrade-20211117121650-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-20211117121650-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20211117121650-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20211117121650-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20211117121650-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20211117121650-2067: (4.703637255s)
--- FAIL: TestKubernetesUpgrade (66.73s)
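When reproducing this failure locally, the docker-side state can be checked by hand before the profile is deleted. The commands below are a sketch that mirrors the ones already in the log (assuming a local docker CLI on the affected host); in the failing case the first command errors with "No such container" while the network and preload volume created by the failed start still exist:

	docker container inspect kubernetes-upgrade-20211117121650-2067 --format '{{.State.Status}}'
	docker network inspect kubernetes-upgrade-20211117121650-2067 --format '{{json .IPAM.Config}}'
	docker volume ls --filter name=kubernetes-upgrade-20211117121650-2067

Running "minikube delete -p kubernetes-upgrade-20211117121650-2067", as the cleanup step above does, is expected to remove the leftover network and volume.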

TestMissingContainerUpgrade (165.75s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1407087579.exe start -p missing-upgrade-20211117121608-2067 --memory=2200 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.1407087579.exe start -p missing-upgrade-20211117121608-2067 --memory=2200 --driver=docker : (1m18.563607653s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20211117121608-2067

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20211117121608-2067: (15.263960607s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20211117121608-2067
version_upgrade_test.go:336: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-20211117121608-2067 --memory=2200 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p missing-upgrade-20211117121608-2067 --memory=2200 --alsologtostderr -v=1 --driver=docker : exit status 80 (1m5.736794752s)

-- stdout --
	* [missing-upgrade-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Kubernetes 1.22.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-20211117121608-2067 in cluster missing-upgrade-20211117121608-2067
	* Pulling base image ...
	* docker "missing-upgrade-20211117121608-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "missing-upgrade-20211117121608-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:17:42.416242   10519 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:17:42.416432   10519 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:17:42.416437   10519 out.go:310] Setting ErrFile to fd 2...
	I1117 12:17:42.416440   10519 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:17:42.416507   10519 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:17:42.416751   10519 out.go:304] Setting JSON to false
	I1117 12:17:42.441716   10519 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":2837,"bootTime":1637177425,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:17:42.441811   10519 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:17:42.469625   10519 out.go:176] * [missing-upgrade-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:17:42.469817   10519 notify.go:174] Checking for updates...
	I1117 12:17:42.517572   10519 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:17:42.543560   10519 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:17:42.569796   10519 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:17:42.595529   10519 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:17:42.595889   10519 config.go:176] Loaded profile config "missing-upgrade-20211117121608-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 12:17:42.595907   10519 start_flags.go:571] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c
	I1117 12:17:42.622305   10519 out.go:176] * Kubernetes 1.22.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.3
	I1117 12:17:42.622340   10519 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:17:42.712555   10519 docker.go:132] docker version: linux-20.10.5
	I1117 12:17:42.712675   10519 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:17:42.865901   10519 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:17:42.831172592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:17:42.914727   10519 out.go:176] * Using the docker driver based on existing profile
	I1117 12:17:42.914765   10519 start.go:280] selected driver: docker
	I1117 12:17:42.914775   10519 start.go:775] validating driver "docker" against &{Name:missing-upgrade-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-20211117121608-2067 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:}
	I1117 12:17:42.914869   10519 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:17:42.918150   10519 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:17:43.071473   10519 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:17:43.037450279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:17:43.071601   10519 cni.go:93] Creating CNI manager for ""
	I1117 12:17:43.071615   10519 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:17:43.071620   10519 start_flags.go:282] config:
	{Name:missing-upgrade-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-20211117121608-2067 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:}
	I1117 12:17:43.098676   10519 out.go:176] * Starting control plane node missing-upgrade-20211117121608-2067 in cluster missing-upgrade-20211117121608-2067
	I1117 12:17:43.098738   10519 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:17:43.147372   10519 out.go:176] * Pulling base image ...
	I1117 12:17:43.147496   10519 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:17:43.147505   10519 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime docker
	W1117 12:17:43.248321   10519 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.18.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1117 12:17:43.248455   10519 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/missing-upgrade-20211117121608-2067/config.json ...
	I1117 12:17:43.248520   10519 cache.go:107] acquiring lock: {Name:mk52cdc7954f3158f9c1882268c4a1621a72a597 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:43.248520   10519 cache.go:107] acquiring lock: {Name:mk484f4aa10be29d59ecef162cc3ba4ef356bc71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:43.248543   10519 cache.go:107] acquiring lock: {Name:mk5b543ac5480f8010c8e84dc625bc345f038729 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:43.248630   10519 cache.go:107] acquiring lock: {Name:mk4809a3d363be2eafc76f6988dc607e496a738d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:43.248689   10519 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I1117 12:17:43.248699   10519 cache.go:107] acquiring lock: {Name:mke297f2c7d5943a83a860d0fc42387307acdd2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:43.248708   10519 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 192.729µs
	I1117 12:17:43.248727   10519 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I1117 12:17:43.248725   10519 cache.go:107] acquiring lock: {Name:mk23cf628b7e6df4dd082e2499c067d5939ca5f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:43.248727   10519 cache.go:107] acquiring lock: {Name:mka5a2eefb82331db57182bdbd528149974d70a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:43.248766   10519 cache.go:107] acquiring lock: {Name:mkc38557d3f08ef749cdb79439f2e56bd72f6169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:43.248774   10519 cache.go:107] acquiring lock: {Name:mk8510e8d29ffb1d7afc63ac2448ba0a514946b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:43.248786   10519 cache.go:107] acquiring lock: {Name:mk757cf2ea27b429afc8f936d2baa977656448fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:43.248853   10519 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
	I1117 12:17:43.248904   10519 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I1117 12:17:43.248929   10519 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 209.764µs
	I1117 12:17:43.248948   10519 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I1117 12:17:43.248965   10519 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I1117 12:17:43.248953   10519 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1117 12:17:43.248963   10519 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
	I1117 12:17:43.248984   10519 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
	I1117 12:17:43.248988   10519 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
	I1117 12:17:43.248990   10519 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 281.174µs
	I1117 12:17:43.249010   10519 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1117 12:17:43.249076   10519 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.7
	I1117 12:17:43.249142   10519 image.go:134] retrieving image: k8s.gcr.io/pause:3.2
	I1117 12:17:43.250264   10519 image.go:176] found k8s.gcr.io/kube-apiserver:v1.18.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-apiserver} tag:v1.18.0 original:k8s.gcr.io/kube-apiserver:v1.18.0} opener:0xc000410230 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:17:43.250297   10519 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0
	I1117 12:17:43.250526   10519 image.go:176] found k8s.gcr.io/kube-proxy:v1.18.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-proxy} tag:v1.18.0 original:k8s.gcr.io/kube-proxy:v1.18.0} opener:0xc000b56000 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:17:43.250556   10519 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0
	I1117 12:17:43.250885   10519 image.go:176] found k8s.gcr.io/etcd:3.4.3-0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:etcd} tag:3.4.3-0 original:k8s.gcr.io/etcd:3.4.3-0} opener:0xc000410380 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:17:43.250898   10519 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0
	I1117 12:17:43.251038   10519 image.go:176] found k8s.gcr.io/pause:3.2 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:pause} tag:3.2 original:k8s.gcr.io/pause:3.2} opener:0xc000b560e0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:17:43.251074   10519 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I1117 12:17:43.251660   10519 image.go:176] found k8s.gcr.io/kube-scheduler:v1.18.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-scheduler} tag:v1.18.0 original:k8s.gcr.io/kube-scheduler:v1.18.0} opener:0xc000410540 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:17:43.251672   10519 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0
	I1117 12:17:43.251938   10519 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.18.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-controller-manager} tag:v1.18.0 original:k8s.gcr.io/kube-controller-manager:v1.18.0} opener:0xc000b561c0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:17:43.251964   10519 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0
	I1117 12:17:43.252739   10519 image.go:176] found k8s.gcr.io/coredns:1.6.7 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:coredns} tag:1.6.7 original:k8s.gcr.io/coredns:1.6.7} opener:0xc000410690 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:17:43.252755   10519 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7
	I1117 12:17:43.253338   10519 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 4.826927ms
	I1117 12:17:43.253529   10519 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 5.035427ms
	I1117 12:17:43.253998   10519 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 5.316993ms
	I1117 12:17:43.254302   10519 cache.go:96] cache image "k8s.gcr.io/pause:3.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 5.659928ms
	I1117 12:17:43.254517   10519 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 5.95847ms
	I1117 12:17:43.254764   10519 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 6.163847ms
	I1117 12:17:43.254829   10519 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 6.181342ms
	I1117 12:17:43.266853   10519 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:17:43.266872   10519 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:17:43.266883   10519 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:17:43.266915   10519 start.go:313] acquiring machines lock for missing-upgrade-20211117121608-2067: {Name:mkbaf96cca919b3b85f9ab0243580c671a5f2e3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:17:43.267054   10519 start.go:317] acquired machines lock for "missing-upgrade-20211117121608-2067" in 127.516µs
	I1117 12:17:43.267074   10519 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:17:43.267084   10519 fix.go:55] fixHost starting: m01
	I1117 12:17:43.267314   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:17:43.370808   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:43.370873   10519 fix.go:108] recreateIfNeeded on missing-upgrade-20211117121608-2067: state= err=unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:43.370898   10519 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:17:43.419639   10519 out.go:176] * docker "missing-upgrade-20211117121608-2067" container is missing, will recreate.
	I1117 12:17:43.419700   10519 delete.go:124] DEMOLISHING missing-upgrade-20211117121608-2067 ...
	I1117 12:17:43.419889   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:17:43.524593   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:17:43.524639   10519 stop.go:75] unable to get state: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:43.524656   10519 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:43.525058   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:17:43.628197   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:43.628242   10519 delete.go:82] Unable to get host status for missing-upgrade-20211117121608-2067, assuming it has already been deleted: state: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:43.628336   10519 cli_runner.go:115] Run: docker container inspect -f {{.Id}} missing-upgrade-20211117121608-2067
	W1117 12:17:43.732842   10519 cli_runner.go:162] docker container inspect -f {{.Id}} missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:17:43.732869   10519 kic.go:360] could not find the container missing-upgrade-20211117121608-2067 to remove it. will try anyways
	I1117 12:17:43.732941   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:17:43.840132   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:17:43.840179   10519 oci.go:83] error getting container status, will try to delete anyways: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:43.840296   10519 cli_runner.go:115] Run: docker exec --privileged -t missing-upgrade-20211117121608-2067 /bin/bash -c "sudo init 0"
	W1117 12:17:43.985771   10519 cli_runner.go:162] docker exec --privileged -t missing-upgrade-20211117121608-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:17:43.985799   10519 oci.go:656] error shutdown missing-upgrade-20211117121608-2067: docker exec --privileged -t missing-upgrade-20211117121608-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:44.991788   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:17:45.098348   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:45.098391   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:45.098405   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:17:45.098436   10519 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:45.658258   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:17:45.766387   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:45.766426   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:45.766438   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:17:45.766464   10519 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:46.849722   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:17:46.956525   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:46.956565   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:46.956574   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:17:46.956595   10519 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:48.277066   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:17:48.382272   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:48.382329   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:48.382351   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:17:48.382373   10519 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:49.965234   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:17:50.071408   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:50.071452   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:50.071462   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:17:50.071480   10519 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:52.412342   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:17:52.548341   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:52.548382   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:52.548405   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:17:52.548425   10519 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:57.058491   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:17:57.159361   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:17:57.159409   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:17:57.159419   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:17:57.159457   10519 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:00.385638   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:00.497337   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:18:00.497381   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:00.497391   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:18:00.497416   10519 oci.go:87] couldn't shut down missing-upgrade-20211117121608-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	 
	I1117 12:18:00.497517   10519 cli_runner.go:115] Run: docker rm -f -v missing-upgrade-20211117121608-2067
	I1117 12:18:00.609201   10519 cli_runner.go:115] Run: docker container inspect -f {{.Id}} missing-upgrade-20211117121608-2067
	W1117 12:18:00.717483   10519 cli_runner.go:162] docker container inspect -f {{.Id}} missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:00.717599   10519 cli_runner.go:115] Run: docker network inspect  --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:18:00.824904   10519 cli_runner.go:162] docker network inspect  --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:18:00.825026   10519 network_create.go:254] running [docker network inspect ] to gather additional debugging logs...
	I1117 12:18:00.825047   10519 cli_runner.go:115] Run: docker network inspect 
	W1117 12:18:00.937070   10519 cli_runner.go:162] docker network inspect  returned with exit code 1
	I1117 12:18:00.937106   10519 network_create.go:257] error running [docker network inspect ]: docker network inspect : exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: 
	I1117 12:18:00.937118   10519 network_create.go:259] output of [docker network inspect ]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: 
	
	** /stderr **
	W1117 12:18:00.939426   10519 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:18:00.939434   10519 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:18:01.949519   10519 start.go:126] createHost starting for "m01" (driver="docker")
	I1117 12:18:02.006920   10519 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:18:02.007081   10519 start.go:160] libmachine.API.Create for "missing-upgrade-20211117121608-2067" (driver="docker")
	I1117 12:18:02.007112   10519 client.go:168] LocalClient.Create starting
	I1117 12:18:02.007264   10519 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:18:02.007328   10519 main.go:130] libmachine: Decoding PEM data...
	I1117 12:18:02.007344   10519 main.go:130] libmachine: Parsing certificate...
	I1117 12:18:02.007410   10519 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:18:02.007473   10519 main.go:130] libmachine: Decoding PEM data...
	I1117 12:18:02.007485   10519 main.go:130] libmachine: Parsing certificate...
	I1117 12:18:02.008064   10519 cli_runner.go:115] Run: docker network inspect missing-upgrade-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:18:02.116788   10519 cli_runner.go:162] docker network inspect missing-upgrade-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:18:02.116919   10519 network_create.go:254] running [docker network inspect missing-upgrade-20211117121608-2067] to gather additional debugging logs...
	I1117 12:18:02.116946   10519 cli_runner.go:115] Run: docker network inspect missing-upgrade-20211117121608-2067
	W1117 12:18:02.221099   10519 cli_runner.go:162] docker network inspect missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:02.221128   10519 network_create.go:257] error running [docker network inspect missing-upgrade-20211117121608-2067]: docker network inspect missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: missing-upgrade-20211117121608-2067
	I1117 12:18:02.221150   10519 network_create.go:259] output of [docker network inspect missing-upgrade-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: missing-upgrade-20211117121608-2067
	
	** /stderr **
	I1117 12:18:02.221233   10519 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:18:02.328605   10519 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000460730] misses:0}
	I1117 12:18:02.328642   10519 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:18:02.328658   10519 network_create.go:106] attempt to create docker network missing-upgrade-20211117121608-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:18:02.328750   10519 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20211117121608-2067
	I1117 12:18:08.180270   10519 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20211117121608-2067: (5.851493554s)
	I1117 12:18:08.180300   10519 network_create.go:90] docker network missing-upgrade-20211117121608-2067 192.168.49.0/24 created
	I1117 12:18:08.180321   10519 kic.go:106] calculated static IP "192.168.49.2" for the "missing-upgrade-20211117121608-2067" container
	I1117 12:18:08.180466   10519 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:18:08.298206   10519 cli_runner.go:115] Run: docker volume create missing-upgrade-20211117121608-2067 --label name.minikube.sigs.k8s.io=missing-upgrade-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:18:08.412284   10519 oci.go:102] Successfully created a docker volume missing-upgrade-20211117121608-2067
	I1117 12:18:08.412439   10519 cli_runner.go:115] Run: docker run --rm --name missing-upgrade-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20211117121608-2067 --entrypoint /usr/bin/test -v missing-upgrade-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:18:09.022419   10519 oci.go:106] Successfully prepared a docker volume missing-upgrade-20211117121608-2067
	E1117 12:18:09.022492   10519 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:18:09.022501   10519 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I1117 12:18:09.022511   10519 client.go:171] LocalClient.Create took 7.015434843s
	I1117 12:18:11.029916   10519 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:18:11.030016   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:11.166304   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:11.166404   10519 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:11.324444   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:11.448313   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:11.448427   10519 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:11.758395   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:11.882594   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:11.882710   10519 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:12.458026   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:12.582122   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	W1117 12:18:12.582243   10519 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	
	W1117 12:18:12.582278   10519 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:12.582297   10519 start.go:129] duration metric: createHost completed in 10.632828869s
	I1117 12:18:12.582384   10519 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:18:12.582450   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:12.703886   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:12.703990   10519 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:12.883423   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:13.013238   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:13.013347   10519 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:13.349043   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:13.501306   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:13.501412   10519 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:13.967175   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:14.081489   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	W1117 12:18:14.081593   10519 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	
	W1117 12:18:14.081613   10519 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:14.081634   10519 fix.go:57] fixHost completed within 30.814732276s
	I1117 12:18:14.081643   10519 start.go:80] releasing machines lock for "missing-upgrade-20211117121608-2067", held for 30.81476102s
	W1117 12:18:14.081659   10519 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:18:14.081770   10519 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:18:14.081777   10519 start.go:547] Will try again in 5 seconds ...
	I1117 12:18:19.091318   10519 start.go:313] acquiring machines lock for missing-upgrade-20211117121608-2067: {Name:mkbaf96cca919b3b85f9ab0243580c671a5f2e3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:18:19.091501   10519 start.go:317] acquired machines lock for "missing-upgrade-20211117121608-2067" in 147.673µs
	I1117 12:18:19.091553   10519 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:18:19.091560   10519 fix.go:55] fixHost starting: m01
	I1117 12:18:19.091949   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:19.197212   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:18:19.197267   10519 fix.go:108] recreateIfNeeded on missing-upgrade-20211117121608-2067: state= err=unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:19.197278   10519 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:18:19.223847   10519 out.go:176] * docker "missing-upgrade-20211117121608-2067" container is missing, will recreate.
	I1117 12:18:19.223883   10519 delete.go:124] DEMOLISHING missing-upgrade-20211117121608-2067 ...
	I1117 12:18:19.224052   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:19.370754   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:18:19.370792   10519 stop.go:75] unable to get state: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:19.370807   10519 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:19.371254   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:19.476480   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:18:19.476540   10519 delete.go:82] Unable to get host status for missing-upgrade-20211117121608-2067, assuming it has already been deleted: state: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:19.476628   10519 cli_runner.go:115] Run: docker container inspect -f {{.Id}} missing-upgrade-20211117121608-2067
	W1117 12:18:19.580541   10519 cli_runner.go:162] docker container inspect -f {{.Id}} missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:19.580565   10519 kic.go:360] could not find the container missing-upgrade-20211117121608-2067 to remove it. will try anyways
	I1117 12:18:19.580647   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:19.684000   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:18:19.684044   10519 oci.go:83] error getting container status, will try to delete anyways: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:19.684138   10519 cli_runner.go:115] Run: docker exec --privileged -t missing-upgrade-20211117121608-2067 /bin/bash -c "sudo init 0"
	W1117 12:18:19.790061   10519 cli_runner.go:162] docker exec --privileged -t missing-upgrade-20211117121608-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:18:19.790085   10519 oci.go:656] error shutdown missing-upgrade-20211117121608-2067: docker exec --privileged -t missing-upgrade-20211117121608-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:20.791496   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:20.899615   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:18:20.899672   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:20.899685   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:18:20.899709   10519 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:21.296929   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:21.402153   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:18:21.402197   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:21.402208   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:18:21.402243   10519 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:22.001526   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:22.106636   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:18:22.106679   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:22.106697   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:18:22.106720   10519 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:23.441395   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:23.543884   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:18:23.543925   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:23.543933   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:18:23.543956   10519 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:24.759349   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:24.865500   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:18:24.865541   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:24.865549   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:18:24.865572   10519 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:26.645714   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:26.748988   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:18:26.749031   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:26.749040   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:18:26.749062   10519 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:30.027739   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:30.131888   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:18:30.131926   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:30.131935   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:18:30.131957   10519 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:36.235840   10519 cli_runner.go:115] Run: docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}
	W1117 12:18:36.338688   10519 cli_runner.go:162] docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:18:36.338735   10519 oci.go:668] temporary error verifying shutdown: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:36.338748   10519 oci.go:670] temporary error: container missing-upgrade-20211117121608-2067 status is  but expect it to be exited
	I1117 12:18:36.338778   10519 oci.go:87] couldn't shut down missing-upgrade-20211117121608-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	 
	I1117 12:18:36.338860   10519 cli_runner.go:115] Run: docker rm -f -v missing-upgrade-20211117121608-2067
	I1117 12:18:36.443446   10519 cli_runner.go:115] Run: docker container inspect -f {{.Id}} missing-upgrade-20211117121608-2067
	W1117 12:18:36.547121   10519 cli_runner.go:162] docker container inspect -f {{.Id}} missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:36.547272   10519 cli_runner.go:115] Run: docker network inspect missing-upgrade-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:18:36.651797   10519 cli_runner.go:115] Run: docker network rm missing-upgrade-20211117121608-2067
	I1117 12:18:39.472720   10519 cli_runner.go:168] Completed: docker network rm missing-upgrade-20211117121608-2067: (2.820886052s)
	W1117 12:18:39.472988   10519 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:18:39.472995   10519 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:18:40.474386   10519 start.go:126] createHost starting for "m01" (driver="docker")
	I1117 12:18:40.501720   10519 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:18:40.501945   10519 start.go:160] libmachine.API.Create for "missing-upgrade-20211117121608-2067" (driver="docker")
	I1117 12:18:40.502022   10519 client.go:168] LocalClient.Create starting
	I1117 12:18:40.502283   10519 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:18:40.502372   10519 main.go:130] libmachine: Decoding PEM data...
	I1117 12:18:40.502397   10519 main.go:130] libmachine: Parsing certificate...
	I1117 12:18:40.502522   10519 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:18:40.502583   10519 main.go:130] libmachine: Decoding PEM data...
	I1117 12:18:40.502598   10519 main.go:130] libmachine: Parsing certificate...
	I1117 12:18:40.503167   10519 cli_runner.go:115] Run: docker network inspect missing-upgrade-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:18:40.606179   10519 cli_runner.go:162] docker network inspect missing-upgrade-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:18:40.606293   10519 network_create.go:254] running [docker network inspect missing-upgrade-20211117121608-2067] to gather additional debugging logs...
	I1117 12:18:40.606313   10519 cli_runner.go:115] Run: docker network inspect missing-upgrade-20211117121608-2067
	W1117 12:18:40.709148   10519 cli_runner.go:162] docker network inspect missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:40.709173   10519 network_create.go:257] error running [docker network inspect missing-upgrade-20211117121608-2067]: docker network inspect missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: missing-upgrade-20211117121608-2067
	I1117 12:18:40.709192   10519 network_create.go:259] output of [docker network inspect missing-upgrade-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: missing-upgrade-20211117121608-2067
	
	** /stderr **
	I1117 12:18:40.709294   10519 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:18:40.812246   10519 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000460730] amended:false}} dirty:map[] misses:0}
	I1117 12:18:40.812278   10519 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:18:40.812478   10519 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000460730] amended:true}} dirty:map[192.168.49.0:0xc000460730 192.168.58.0:0xc000aac2d0] misses:0}
	I1117 12:18:40.812491   10519 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:18:40.812497   10519 network_create.go:106] attempt to create docker network missing-upgrade-20211117121608-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:18:40.812583   10519 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20211117121608-2067
	I1117 12:18:41.972321   10519 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20211117121608-2067: (1.15969773s)
	I1117 12:18:41.972347   10519 network_create.go:90] docker network missing-upgrade-20211117121608-2067 192.168.58.0/24 created
	I1117 12:18:41.972373   10519 kic.go:106] calculated static IP "192.168.58.2" for the "missing-upgrade-20211117121608-2067" container
	I1117 12:18:41.972495   10519 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:18:42.077401   10519 cli_runner.go:115] Run: docker volume create missing-upgrade-20211117121608-2067 --label name.minikube.sigs.k8s.io=missing-upgrade-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:18:42.181535   10519 oci.go:102] Successfully created a docker volume missing-upgrade-20211117121608-2067
	I1117 12:18:42.181698   10519 cli_runner.go:115] Run: docker run --rm --name missing-upgrade-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20211117121608-2067 --entrypoint /usr/bin/test -v missing-upgrade-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:18:42.585723   10519 oci.go:106] Successfully prepared a docker volume missing-upgrade-20211117121608-2067
	E1117 12:18:42.585770   10519 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:18:42.585780   10519 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I1117 12:18:42.585782   10519 client.go:171] LocalClient.Create took 2.083765474s
	I1117 12:18:44.589123   10519 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:18:44.589250   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:44.693601   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:44.693692   10519 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:44.892256   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:44.994614   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:44.994712   10519 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:45.293805   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:45.397263   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:45.397356   10519 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:46.102304   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:46.204353   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	W1117 12:18:46.204457   10519 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	
	W1117 12:18:46.204495   10519 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:46.204508   10519 start.go:129] duration metric: createHost completed in 5.730113968s
	I1117 12:18:46.204567   10519 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:18:46.204630   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:46.308112   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:46.308208   10519 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:46.650193   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:46.754001   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:46.754105   10519 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:47.203511   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:47.306840   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	I1117 12:18:47.306954   10519 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:47.883476   10519 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067
	W1117 12:18:47.986882   10519 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067 returned with exit code 1
	W1117 12:18:47.986988   10519 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	
	W1117 12:18:47.987014   10519 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "missing-upgrade-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067
	I1117 12:18:47.987023   10519 fix.go:57] fixHost completed within 28.895643096s
	I1117 12:18:47.987032   10519 start.go:80] releasing machines lock for "missing-upgrade-20211117121608-2067", held for 28.895682653s
	W1117 12:18:47.987169   10519 out.go:241] * Failed to start docker container. Running "minikube delete -p missing-upgrade-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p missing-upgrade-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:18:48.013974   10519 out.go:176] 
	W1117 12:18:48.014038   10519 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:18:48.014046   10519 out.go:241] * 
	* 
	W1117 12:18:48.014620   10519 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:18:48.091568   10519 out.go:176] 

                                                
                                                
** /stderr **
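The stderr above reduces to one pattern: the fix/recreate path keeps probing the profile's container with `docker container inspect --format={{.State.Status}}`, treats "No such container" as a nonexistent machine, and backs off between attempts (391ms, 594ms, 1.3s, 1.2s, 1.8s, 3.3s, 6.1s) before giving up. Below is a minimal Go sketch of that probe-and-backoff loop; it is an illustration of the pattern, not minikube's actual fix.go/retry.go code, and the simple doubling delay only roughly mirrors the jittered delays in the log.

	// probe_state.go: sketch of the state-probe-with-backoff pattern seen above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerState returns the container status, or "nonexistent" when docker
	// reports "No such container" (as it does throughout the log above).
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "No such container") {
				return "nonexistent", nil
			}
			return "", fmt.Errorf("unknown state %q: %v", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		name := "missing-upgrade-20211117121608-2067" // profile name from the log
		delay := 400 * time.Millisecond
		for attempt := 1; attempt <= 8; attempt++ {
			state, err := containerState(name)
			if err == nil && state == "exited" {
				fmt.Println("container is exited")
				return
			}
			fmt.Printf("attempt %d: state=%q err=%v; will retry after %v\n", attempt, state, err, delay)
			time.Sleep(delay)
			delay *= 2 // grows roughly like the 391ms -> 6.1s progression above
		}
		fmt.Println("giving up: could not verify the container is exited")
	}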
version_upgrade_test.go:338: failed missing container upgrade from v1.9.1. args: out/minikube-darwin-amd64 start -p missing-upgrade-20211117121608-2067 --memory=2200 --alsologtostderr -v=1 --driver=docker  : exit status 80
version_upgrade_test.go:340: *** TestMissingContainerUpgrade FAILED at 2021-11-17 12:18:48.125999 -0800 PST m=+1723.855872671
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20211117121608-2067
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20211117121608-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "missing-upgrade-20211117121608-2067",
	        "Id": "767e4a1944e669d5c08cb609ebe81801bdb5202c8b6278610eb98ad0373400c8",
	        "Created": "2021-11-17T20:18:40.932138857Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20211117121608-2067 -n missing-upgrade-20211117121608-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20211117121608-2067 -n missing-upgrade-20211117121608-2067: exit status 7 (148.016891ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:18:48.389591   11026 status.go:247] status error: host: state: unknown state "missing-upgrade-20211117121608-2067": docker container inspect missing-upgrade-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: missing-upgrade-20211117121608-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-20211117121608-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "missing-upgrade-20211117121608-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20211117121608-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20211117121608-2067: (5.368278607s)
--- FAIL: TestMissingContainerUpgrade (165.75s)
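Every failed start in this report bottoms out in the same line from oci.go:173, "error getting kernel modules path: Unable to locate kernel modules", raised while the KIC node is being created. The sketch below is a hypothetical reconstruction of that kind of probe (an assumption, not minikube's actual oci.go): look for /lib/modules/<uname -r>, fall back to /lib/modules, and fail if neither exists, which is the situation on this Docker Desktop for macOS host.

	// kernel_modules.go: hypothetical probe that fails the way the log reports.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// kernelModulesPath looks for a modules directory for the running kernel,
	// then for a bare /lib/modules, and errors out if neither is present.
	func kernelModulesPath() (string, error) {
		if release, err := exec.Command("uname", "-r").Output(); err == nil {
			p := filepath.Join("/lib/modules", strings.TrimSpace(string(release)))
			if fi, err := os.Stat(p); err == nil && fi.IsDir() {
				return p, nil
			}
		}
		if fi, err := os.Stat("/lib/modules"); err == nil && fi.IsDir() {
			return "/lib/modules", nil
		}
		return "", fmt.Errorf("unable to locate kernel modules")
	}

	func main() {
		p, err := kernelModulesPath()
		if err != nil {
			// On a macOS host there is no /lib/modules, so a check like this fails,
			// which is what aborts each "Creating docker container ..." step above.
			fmt.Println("error getting kernel modules path:", err)
			return
		}
		fmt.Println("kernel modules found at", p)
	}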

                                                
                                    
TestPause/serial/Start (46.33s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20211117122013-2067 --memory=2048 --install-addons=false --wait=all --driver=docker 

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:78: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-20211117122013-2067 --memory=2048 --install-addons=false --wait=all --driver=docker : exit status 80 (46.01584493s)

                                                
                                                
-- stdout --
	* [pause-20211117122013-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node pause-20211117122013-2067 in cluster pause-20211117122013-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "pause-20211117122013-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:20:21.111741   11917 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:20:54.720467   11917 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p pause-20211117122013-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:80: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-20211117122013-2067 --memory=2048 --install-addons=false --wait=all --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117122013-2067
helpers_test.go:235: (dbg) docker inspect pause-20211117122013-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "pause-20211117122013-2067",
	        "Id": "c2255695e5479af24ae2ffb5f3c2a0ec3266b641456ec12e0c578d03d84142f0",
	        "Created": "2021-11-17T20:20:48.224874376Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067: exit status 7 (177.924696ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:21:00.325143   12294 status.go:247] status error: host: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117122013-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Start (46.33s)
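The post-mortem helper (helpers_test.go:239) runs the freshly built binary with `status --format={{.Host}}` and treats exit status 7 plus the output "Nonexistent" as a host that never came up, which is why log retrieval is skipped. Below is a minimal sketch of that check; the binary path and profile name are copied from the run above, while the Go wrapper itself is only illustrative.

	// postmortem_status.go: sketch of the host-status check used in the post-mortem.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState runs the status command and returns its stdout and exit code.
	func hostState(profile string) (state string, exitCode int) {
		out, err := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile).Output()
		if exitErr, ok := err.(*exec.ExitError); ok {
			exitCode = exitErr.ExitCode()
		}
		return strings.TrimSpace(string(out)), exitCode
	}

	func main() {
		state, code := hostState("pause-20211117122013-2067")
		if code == 7 || state == "Nonexistent" {
			// Matches the report: "host is not running, skipping log retrieval".
			fmt.Printf("host state %q (exit %d): not running, skipping log retrieval\n", state, code)
			return
		}
		fmt.Printf("host state %q (exit %d)\n", state, code)
	}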

                                                
                                    
TestNoKubernetes/serial/Start (58.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20211117122048-2067 --no-kubernetes --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:78: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20211117122048-2067 --no-kubernetes --driver=docker : exit status 80 (58.244542536s)

                                                
                                                
-- stdout --
	* [NoKubernetes-20211117122048-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting minikube without Kubernetes NoKubernetes-20211117122048-2067 in cluster NoKubernetes-20211117122048-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	* docker "NoKubernetes-20211117122048-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:21:02.876422   12197 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:21:41.762625   12197 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20211117122048-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:80: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-20211117122048-2067 --no-kubernetes --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117122048-2067
helpers_test.go:235: (dbg) docker inspect NoKubernetes-20211117122048-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-20211117122048-2067",
	        "Id": "8c0256a86feeb2aa81466c6b046cb8d9b5751458476eaad03e06395b4bea9011",
	        "Created": "2021-11-17T20:21:36.165972027Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117122048-2067 -n NoKubernetes-20211117122048-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117122048-2067 -n NoKubernetes-20211117122048-2067: exit status 7 (140.840399ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:21:47.127978   12697 status.go:247] status error: host: state: unknown state "NoKubernetes-20211117122048-2067": docker container inspect NoKubernetes-20211117122048-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117122048-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117122048-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Start (58.50s)
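Although the node container never appears, each attempt still leaves a docker network behind: the inspect output above shows NoKubernetes-20211117122048-2067 on 192.168.67.0/24 carrying the label created_by.minikube.sigs.k8s.io=true, and it is later removed by `minikube delete -p` (or `docker network rm`, as in the earlier log). The sketch below lists such leftovers by that label; only the label and the docker commands come from the logs, the Go wrapper is illustrative.

	// leftover_networks.go: list docker networks labelled by minikube, as seen above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "network", "ls",
			"--filter", "label=created_by.minikube.sigs.k8s.io=true",
			"--format", "{{.Name}}").Output()
		if err != nil {
			fmt.Println("docker network ls failed:", err)
			return
		}
		for _, name := range strings.Fields(string(out)) {
			// Each of these is a candidate for `minikube delete -p <profile>`
			// or `docker network rm <name>`.
			fmt.Println("leftover minikube network:", name)
		}
	}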

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (74.74s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20211117122013-2067 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-20211117122013-2067 --alsologtostderr -v=1 --driver=docker : exit status 80 (1m14.30602605s)

                                                
                                                
-- stdout --
	* [pause-20211117122013-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node pause-20211117122013-2067 in cluster pause-20211117122013-2067
	* Pulling base image ...
	* docker "pause-20211117122013-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "pause-20211117122013-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:21:00.385973   12299 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:21:00.386121   12299 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:21:00.386126   12299 out.go:310] Setting ErrFile to fd 2...
	I1117 12:21:00.386129   12299 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:21:00.386223   12299 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:21:00.386529   12299 out.go:304] Setting JSON to false
	I1117 12:21:00.416686   12299 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3035,"bootTime":1637177425,"procs":326,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:21:00.416889   12299 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:21:00.457190   12299 out.go:176] * [pause-20211117122013-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:21:00.457274   12299 notify.go:174] Checking for updates...
	I1117 12:21:00.518446   12299 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:21:00.589613   12299 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:21:00.653253   12299 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:21:00.751692   12299 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:21:00.752163   12299 config.go:176] Loaded profile config "pause-20211117122013-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:21:00.752688   12299 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:21:00.854312   12299 docker.go:132] docker version: linux-20.10.5
	I1117 12:21:00.854487   12299 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:21:01.022900   12299 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-17 20:21:00.975649966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:21:01.067328   12299 out.go:176] * Using the docker driver based on existing profile
	I1117 12:21:01.067355   12299 start.go:280] selected driver: docker
	I1117 12:21:01.067366   12299 start.go:775] validating driver "docker" against &{Name:pause-20211117122013-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:pause-20211117122013-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:21:01.067440   12299 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:21:01.067665   12299 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:21:01.324395   12299 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:52 SystemTime:2021-11-17 20:21:01.177233283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
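The docker system info --format "{{json .}}" probe above is what feeds the driver health check. Below is a minimal Go sketch of the same call, not minikube's own code; the dockerInfo struct keeps only a handful of the fields visible in the log entry.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo holds only the fields this sketch cares about; the real
    // output contains many more (see the log entry above).
    type dockerInfo struct {
        ServerVersion   string `json:"ServerVersion"`
        OperatingSystem string `json:"OperatingSystem"`
        NCPU            int    `json:"NCPU"`
        MemTotal        int64  `json:"MemTotal"`
    }

    func main() {
        // Same invocation as in the log: docker system info --format "{{json .}}"
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("server %s on %s, %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }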
	I1117 12:21:01.326884   12299 cni.go:93] Creating CNI manager for ""
	I1117 12:21:01.326902   12299 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:21:01.326914   12299 start_flags.go:282] config:
	{Name:pause-20211117122013-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:pause-20211117122013-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:21:01.386273   12299 out.go:176] * Starting control plane node pause-20211117122013-2067 in cluster pause-20211117122013-2067
	I1117 12:21:01.386363   12299 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:21:01.412239   12299 out.go:176] * Pulling base image ...
	I1117 12:21:01.412290   12299 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:21:01.412347   12299 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:21:01.412404   12299 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:21:01.412428   12299 cache.go:57] Caching tarball of preloaded images
	I1117 12:21:01.412635   12299 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:21:01.412654   12299 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:21:01.413594   12299 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/pause-20211117122013-2067/config.json ...
	I1117 12:21:01.531095   12299 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:21:01.531113   12299 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:21:01.531127   12299 cache.go:206] Successfully downloaded all kic artifacts
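The image.go lines above decide between pulling the kic base image and reusing a copy that is already in the local daemon. A minimal sketch of that check, using the digest-pinned reference from the log; this is illustrative, not the actual minikube implementation.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The digest-pinned base image reference from the log above.
        ref := "gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c"

        // `docker image inspect` exits non-zero when the image is not present
        // locally, so the exit code alone answers "pull or skip?".
        if err := exec.Command("docker", "image", "inspect", ref).Run(); err != nil {
            fmt.Println("image not in local daemon, a pull would be needed")
            return
        }
        fmt.Println("image already in local daemon, skipping pull")
    }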
	I1117 12:21:01.531199   12299 start.go:313] acquiring machines lock for pause-20211117122013-2067: {Name:mkf184beacaf079c083053d947afeffc259c32e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:21:01.531308   12299 start.go:317] acquired machines lock for "pause-20211117122013-2067" in 83.721µs
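The machines lock above serializes concurrent minikube invocations before the host is touched; the lock spec in the log shows a 500ms retry delay and a 10m timeout. Below is a rough file-lock illustration under those parameters; the lock path is made up and the real implementation differs.

    package main

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // acquire takes an exclusive advisory lock on path, retrying every delay
    // until timeout expires. Only an illustration of the "acquiring machines
    // lock" / "acquired machines lock" pair in the log.
    func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return nil, err
        }
        deadline := time.Now().Add(timeout)
        for {
            if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
                return f, nil
            }
            if time.Now().After(deadline) {
                f.Close()
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(delay) // 500ms delay, as in the lock spec above
        }
    }

    func main() {
        start := time.Now()
        f, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()
        fmt.Printf("acquired machines lock in %s\n", time.Since(start))
    }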
	I1117 12:21:01.531330   12299 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:21:01.531342   12299 fix.go:55] fixHost starting: 
	I1117 12:21:01.531655   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:01.634985   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:01.635055   12299 fix.go:108] recreateIfNeeded on pause-20211117122013-2067: state= err=unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:01.635073   12299 fix.go:113] machineExists: false. err=machine does not exist
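The inspect failure above is how a missing node container is detected: docker container inspect with the {{.State.Status}} template prints "No such container" and exits 1 when the container is gone. A small Go sketch of the same probe; the helper name containerState is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState mirrors the `docker container inspect <name>
    // --format={{.State.Status}}` calls in the log: it returns the state
    // string, or an error when the container does not exist.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").CombinedOutput()
        if err != nil {
            return "", fmt.Errorf("unknown state %q: %s", name, strings.TrimSpace(string(out)))
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("pause-20211117122013-2067")
        if err != nil {
            fmt.Println("container is missing or unreadable:", err)
            return
        }
        fmt.Println("container state:", state)
    }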
	I1117 12:21:01.683573   12299 out.go:176] * docker "pause-20211117122013-2067" container is missing, will recreate.
	I1117 12:21:01.683603   12299 delete.go:124] DEMOLISHING pause-20211117122013-2067 ...
	I1117 12:21:01.683718   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:01.782267   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:21:01.782309   12299 stop.go:75] unable to get state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:01.782322   12299 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:01.782756   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:01.882869   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:01.882914   12299 delete.go:82] Unable to get host status for pause-20211117122013-2067, assuming it has already been deleted: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:01.883012   12299 cli_runner.go:115] Run: docker container inspect -f {{.Id}} pause-20211117122013-2067
	W1117 12:21:01.983545   12299 cli_runner.go:162] docker container inspect -f {{.Id}} pause-20211117122013-2067 returned with exit code 1
	I1117 12:21:01.983583   12299 kic.go:360] could not find the container pause-20211117122013-2067 to remove it. will try anyways
	I1117 12:21:01.983681   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:02.082530   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:21:02.082579   12299 oci.go:83] error getting container status, will try to delete anyways: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:02.082670   12299 cli_runner.go:115] Run: docker exec --privileged -t pause-20211117122013-2067 /bin/bash -c "sudo init 0"
	W1117 12:21:02.186652   12299 cli_runner.go:162] docker exec --privileged -t pause-20211117122013-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:21:02.186678   12299 oci.go:656] error shutdown pause-20211117122013-2067: docker exec --privileged -t pause-20211117122013-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:03.189753   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:03.294190   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:03.294244   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:03.294253   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:03.294286   12299 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:03.856304   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:03.960751   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:03.960793   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:03.960811   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:03.960840   12299 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:05.050354   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:05.150095   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:05.150136   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:05.150150   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:05.150169   12299 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:06.468093   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:06.570302   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:06.570342   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:06.570351   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:06.570374   12299 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:08.156414   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:08.260010   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:08.260050   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:08.260072   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:08.260091   12299 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:10.606360   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:10.706968   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:10.707007   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:10.707015   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:10.707035   12299 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:15.219235   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:15.319876   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:15.319923   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:15.319935   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:15.319964   12299 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:18.545207   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:18.651507   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:18.651547   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:18.651557   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:18.651596   12299 oci.go:87] couldn't shut down pause-20211117122013-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	 
	I1117 12:21:18.651684   12299 cli_runner.go:115] Run: docker rm -f -v pause-20211117122013-2067
	I1117 12:21:18.754405   12299 cli_runner.go:115] Run: docker container inspect -f {{.Id}} pause-20211117122013-2067
	W1117 12:21:18.854590   12299 cli_runner.go:162] docker container inspect -f {{.Id}} pause-20211117122013-2067 returned with exit code 1
	I1117 12:21:18.854705   12299 cli_runner.go:115] Run: docker network inspect pause-20211117122013-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:21:18.955375   12299 cli_runner.go:115] Run: docker network rm pause-20211117122013-2067
	I1117 12:21:22.429689   12299 cli_runner.go:168] Completed: docker network rm pause-20211117122013-2067: (3.474289016s)
	W1117 12:21:22.429981   12299 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:21:22.429989   12299 fix.go:120] Sleeping 1 second for extra luck!
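The long run of "will retry after ..." lines during the demolish phase above comes from re-checking the container state with a growing delay between attempts. Here is a generic sketch of that wait-and-recheck pattern, with made-up delays and a stubbed check; it is not the exact retry.go logic.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs check until it succeeds or maxTries is reached,
    // sleeping a little longer (with jitter) between attempts, which is the
    // pattern behind the "will retry after 552.330144ms" style lines above.
    func retryWithBackoff(check func() error, maxTries int, base time.Duration) error {
        var err error
        for i := 0; i < maxTries; i++ {
            if err = check(); err == nil {
                return nil
            }
            // Grow the delay roughly geometrically and add jitter.
            sleep := base*time.Duration(1<<uint(i)) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
        }
        return err
    }

    func main() {
        attempts := 0
        err := retryWithBackoff(func() error {
            attempts++
            if attempts < 3 {
                return errors.New("couldn't verify container is exited")
            }
            return nil
        }, 5, 500*time.Millisecond)
        fmt.Println("result:", err, "after", attempts, "attempts")
    }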
	I1117 12:21:23.430547   12299 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:21:23.457736   12299 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:21:23.457885   12299 start.go:160] libmachine.API.Create for "pause-20211117122013-2067" (driver="docker")
	I1117 12:21:23.457919   12299 client.go:168] LocalClient.Create starting
	I1117 12:21:23.458082   12299 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:21:23.458149   12299 main.go:130] libmachine: Decoding PEM data...
	I1117 12:21:23.458177   12299 main.go:130] libmachine: Parsing certificate...
	I1117 12:21:23.458279   12299 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:21:23.458322   12299 main.go:130] libmachine: Decoding PEM data...
	I1117 12:21:23.458334   12299 main.go:130] libmachine: Parsing certificate...
	I1117 12:21:23.459131   12299 cli_runner.go:115] Run: docker network inspect pause-20211117122013-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:21:23.561722   12299 cli_runner.go:162] docker network inspect pause-20211117122013-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:21:23.561821   12299 network_create.go:254] running [docker network inspect pause-20211117122013-2067] to gather additional debugging logs...
	I1117 12:21:23.561842   12299 cli_runner.go:115] Run: docker network inspect pause-20211117122013-2067
	W1117 12:21:23.679528   12299 cli_runner.go:162] docker network inspect pause-20211117122013-2067 returned with exit code 1
	I1117 12:21:23.679553   12299 network_create.go:257] error running [docker network inspect pause-20211117122013-2067]: docker network inspect pause-20211117122013-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: pause-20211117122013-2067
	I1117 12:21:23.679596   12299 network_create.go:259] output of [docker network inspect pause-20211117122013-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: pause-20211117122013-2067
	
	** /stderr **
	I1117 12:21:23.679690   12299 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:21:23.778057   12299 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007a6268] misses:0}
	I1117 12:21:23.778093   12299 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:21:23.778108   12299 network_create.go:106] attempt to create docker network pause-20211117122013-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:21:23.778187   12299 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117122013-2067
	W1117 12:21:23.876976   12299 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117122013-2067 returned with exit code 1
	W1117 12:21:23.877016   12299 network_create.go:98] failed to create docker network pause-20211117122013-2067 192.168.49.0/24, will retry: subnet is taken
	I1117 12:21:23.877240   12299 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007a6268] amended:false}} dirty:map[] misses:0}
	I1117 12:21:23.877256   12299 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:21:23.877420   12299 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007a6268] amended:true}} dirty:map[192.168.49.0:0xc0007a6268 192.168.58.0:0xc00000ef18] misses:0}
	I1117 12:21:23.877433   12299 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:21:23.877440   12299 network_create.go:106] attempt to create docker network pause-20211117122013-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:21:23.877517   12299 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117122013-2067
	I1117 12:21:29.526431   12299 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117122013-2067: (5.648890854s)
	I1117 12:21:29.526453   12299 network_create.go:90] docker network pause-20211117122013-2067 192.168.58.0/24 created
	I1117 12:21:29.526471   12299 kic.go:106] calculated static IP "192.168.58.2" for the "pause-20211117122013-2067" container
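The network_create.go lines above show the subnet fallback: creating the bridge network on 192.168.49.0/24 fails because the subnet is taken, so the next /24 candidate (192.168.58.0/24) is tried. A sketch of that loop using the same docker network create flags as the log; the candidate list and error handling are simplified.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Candidate /24 subnets in the same order the log walks through them.
        candidates := []struct{ subnet, gateway string }{
            {"192.168.49.0/24", "192.168.49.1"},
            {"192.168.58.0/24", "192.168.58.1"},
            {"192.168.67.0/24", "192.168.67.1"},
        }
        name := "pause-20211117122013-2067"

        for _, c := range candidates {
            // Same flags as the log's `docker network create` invocation.
            cmd := exec.Command("docker", "network", "create",
                "--driver=bridge",
                "--subnet="+c.subnet,
                "--gateway="+c.gateway,
                "-o", "--ip-masq", "-o", "--icc",
                "-o", "com.docker.network.driver.mtu=1500",
                "--label=created_by.minikube.sigs.k8s.io=true",
                name)
            if out, err := cmd.CombinedOutput(); err != nil {
                // Typically an address-pool overlap error when the subnet is
                // already taken; move on to the next candidate.
                fmt.Printf("subnet %s unavailable: %s\n", c.subnet, out)
                continue
            }
            fmt.Printf("created network %s on %s\n", name, c.subnet)
            return
        }
        fmt.Println("no free subnet found")
    }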
	I1117 12:21:29.526581   12299 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:21:29.624793   12299 cli_runner.go:115] Run: docker volume create pause-20211117122013-2067 --label name.minikube.sigs.k8s.io=pause-20211117122013-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:21:29.721650   12299 oci.go:102] Successfully created a docker volume pause-20211117122013-2067
	I1117 12:21:29.721772   12299 cli_runner.go:115] Run: docker run --rm --name pause-20211117122013-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-20211117122013-2067 --entrypoint /usr/bin/test -v pause-20211117122013-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:21:30.143574   12299 oci.go:106] Successfully prepared a docker volume pause-20211117122013-2067
	E1117 12:21:30.143627   12299 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:21:30.143643   12299 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:21:30.143647   12299 client.go:171] LocalClient.Create took 6.685763748s
	I1117 12:21:30.143668   12299 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:21:30.143779   12299 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v pause-20211117122013-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
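The two docker run lines above prepare the node's /var volume: a labelled named volume is created, then the preloaded-images tarball is unpacked into it by running tar inside the kicbase image. A condensed sketch of those steps; the host tarball path here is a placeholder for the Jenkins cache path shown in the log.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        vol := "pause-20211117122013-2067"
        image := "gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c"
        // Illustrative host path; the log uses the workspace's preload cache.
        tarball := "/path/to/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4"

        // Create the named volume that will back /var in the node container.
        if err := exec.Command("docker", "volume", "create", vol,
            "--label", "name.minikube.sigs.k8s.io="+vol,
            "--label", "created_by.minikube.sigs.k8s.io=true").Run(); err != nil {
            fmt.Println("volume create failed:", err)
            return
        }

        // Extract the lz4 preload tarball into the volume by running tar inside
        // the base image, mirroring the `docker run --rm --entrypoint
        // /usr/bin/tar ...` line above.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", vol+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s\n", err, out)
            return
        }
        fmt.Println("preloaded images extracted into volume", vol)
    }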
	I1117 12:21:32.143988   12299 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:21:32.144107   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:21:32.262768   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:21:32.262854   12299 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:32.420720   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:21:32.550106   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:21:32.550182   12299 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:32.856061   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:21:32.975166   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:21:32.975245   12299 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:33.556103   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:21:33.678051   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	W1117 12:21:33.678165   12299 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	
	W1117 12:21:33.678217   12299 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:33.678240   12299 start.go:129] duration metric: createHost completed in 10.247712238s
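The repeated inspect calls above with the (index ... "22/tcp") template are attempts to resolve the host port Docker published for the container's SSH port; they fail here only because the node container was never actually created. A sketch of that lookup; the function name sshHostPort is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort returns the host port mapped to the container's 22/tcp,
    // using the same Go template as the inspect calls in the log above.
    func sshHostPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).CombinedOutput()
        if err != nil {
            // In the log this fails with "No such container" because the
            // container does not exist yet.
            return "", fmt.Errorf("get port 22 for %q: %v: %s", container, err, out)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("pause-20211117122013-2067")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("ssh is published on host port", port)
    }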
	I1117 12:21:33.678326   12299 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:21:33.678403   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:21:33.798569   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:21:33.798646   12299 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:33.978333   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:21:34.095260   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:21:34.095386   12299 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:34.431369   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:21:34.570231   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:21:34.570354   12299 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:35.033132   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:21:35.151332   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	W1117 12:21:35.151488   12299 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	
	W1117 12:21:35.151520   12299 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:35.151540   12299 fix.go:57] fixHost completed within 33.620411894s
	I1117 12:21:35.151558   12299 start.go:80] releasing machines lock for "pause-20211117122013-2067", held for 33.620451517s
	W1117 12:21:35.151579   12299 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:21:35.151734   12299 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:21:35.151750   12299 start.go:547] Will try again in 5 seconds ...
	I1117 12:21:36.188218   12299 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v pause-20211117122013-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.044416576s)
	I1117 12:21:36.188239   12299 kic.go:188] duration metric: took 6.044610 seconds to extract preloaded images to volume
	I1117 12:21:40.156169   12299 start.go:313] acquiring machines lock for pause-20211117122013-2067: {Name:mkf184beacaf079c083053d947afeffc259c32e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:21:40.156433   12299 start.go:317] acquired machines lock for "pause-20211117122013-2067" in 208.265µs
	I1117 12:21:40.156482   12299 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:21:40.156493   12299 fix.go:55] fixHost starting: 
	I1117 12:21:40.156993   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:40.255155   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:40.255199   12299 fix.go:108] recreateIfNeeded on pause-20211117122013-2067: state= err=unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:40.255210   12299 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:21:40.281803   12299 out.go:176] * docker "pause-20211117122013-2067" container is missing, will recreate.
	I1117 12:21:40.281822   12299 delete.go:124] DEMOLISHING pause-20211117122013-2067 ...
	I1117 12:21:40.281931   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:40.395923   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:21:40.395960   12299 stop.go:75] unable to get state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:40.395977   12299 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:40.396402   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:40.496366   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:40.496409   12299 delete.go:82] Unable to get host status for pause-20211117122013-2067, assuming it has already been deleted: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:40.496498   12299 cli_runner.go:115] Run: docker container inspect -f {{.Id}} pause-20211117122013-2067
	W1117 12:21:40.597347   12299 cli_runner.go:162] docker container inspect -f {{.Id}} pause-20211117122013-2067 returned with exit code 1
	I1117 12:21:40.597379   12299 kic.go:360] could not find the container pause-20211117122013-2067 to remove it. will try anyways
	I1117 12:21:40.597493   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:40.696909   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:21:40.696951   12299 oci.go:83] error getting container status, will try to delete anyways: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:40.697046   12299 cli_runner.go:115] Run: docker exec --privileged -t pause-20211117122013-2067 /bin/bash -c "sudo init 0"
	W1117 12:21:40.798382   12299 cli_runner.go:162] docker exec --privileged -t pause-20211117122013-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:21:40.798412   12299 oci.go:656] error shutdown pause-20211117122013-2067: docker exec --privileged -t pause-20211117122013-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:41.806077   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:41.908310   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:41.908349   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:41.908355   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:41.908376   12299 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:42.306183   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:42.407712   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:42.407752   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:42.407760   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:42.407783   12299 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:43.006115   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:43.109407   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:43.109456   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:43.109466   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:43.109495   12299 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:44.436258   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:44.539499   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:44.539537   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:44.539546   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:44.539570   12299 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:45.758479   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:45.861531   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:45.861588   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:45.861602   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:45.861633   12299 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:47.641811   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:47.753363   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:47.753401   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:47.753420   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:47.753442   12299 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:51.025468   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:51.126161   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:51.126209   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:51.126218   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:51.126243   12299 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:57.230625   12299 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:21:57.335981   12299 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:21:57.336019   12299 oci.go:668] temporary error verifying shutdown: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:21:57.336028   12299 oci.go:670] temporary error: container pause-20211117122013-2067 status is  but expect it to be exited
	I1117 12:21:57.336055   12299 oci.go:87] couldn't shut down pause-20211117122013-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	 
	I1117 12:21:57.336142   12299 cli_runner.go:115] Run: docker rm -f -v pause-20211117122013-2067
	I1117 12:21:57.437150   12299 cli_runner.go:115] Run: docker container inspect -f {{.Id}} pause-20211117122013-2067
	W1117 12:21:57.534892   12299 cli_runner.go:162] docker container inspect -f {{.Id}} pause-20211117122013-2067 returned with exit code 1
	I1117 12:21:57.534999   12299 cli_runner.go:115] Run: docker network inspect pause-20211117122013-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:21:57.634443   12299 cli_runner.go:115] Run: docker network rm pause-20211117122013-2067
	I1117 12:22:01.019513   12299 cli_runner.go:168] Completed: docker network rm pause-20211117122013-2067: (3.385032918s)
	W1117 12:22:01.019793   12299 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:22:01.019800   12299 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:22:02.027568   12299 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:22:02.054771   12299 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:22:02.054974   12299 start.go:160] libmachine.API.Create for "pause-20211117122013-2067" (driver="docker")
	I1117 12:22:02.055012   12299 client.go:168] LocalClient.Create starting
	I1117 12:22:02.055201   12299 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:22:02.055285   12299 main.go:130] libmachine: Decoding PEM data...
	I1117 12:22:02.055312   12299 main.go:130] libmachine: Parsing certificate...
	I1117 12:22:02.055397   12299 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:22:02.055456   12299 main.go:130] libmachine: Decoding PEM data...
	I1117 12:22:02.055472   12299 main.go:130] libmachine: Parsing certificate...
	I1117 12:22:02.056283   12299 cli_runner.go:115] Run: docker network inspect pause-20211117122013-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:22:02.159097   12299 cli_runner.go:162] docker network inspect pause-20211117122013-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:22:02.159198   12299 network_create.go:254] running [docker network inspect pause-20211117122013-2067] to gather additional debugging logs...
	I1117 12:22:02.159213   12299 cli_runner.go:115] Run: docker network inspect pause-20211117122013-2067
	W1117 12:22:02.260338   12299 cli_runner.go:162] docker network inspect pause-20211117122013-2067 returned with exit code 1
	I1117 12:22:02.260361   12299 network_create.go:257] error running [docker network inspect pause-20211117122013-2067]: docker network inspect pause-20211117122013-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: pause-20211117122013-2067
	I1117 12:22:02.260373   12299 network_create.go:259] output of [docker network inspect pause-20211117122013-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: pause-20211117122013-2067
	
	** /stderr **
	I1117 12:22:02.260464   12299 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:22:02.361498   12299 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007a6268] amended:true}} dirty:map[192.168.49.0:0xc0007a6268 192.168.58.0:0xc00000ef18] misses:0}
	I1117 12:22:02.361533   12299 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:22:02.361710   12299 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007a6268] amended:true}} dirty:map[192.168.49.0:0xc0007a6268 192.168.58.0:0xc00000ef18] misses:1}
	I1117 12:22:02.361718   12299 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:22:02.361877   12299 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007a6268] amended:true}} dirty:map[192.168.49.0:0xc0007a6268 192.168.58.0:0xc00000ef18 192.168.67.0:0xc0004ac168] misses:1}
	I1117 12:22:02.361889   12299 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:22:02.361895   12299 network_create.go:106] attempt to create docker network pause-20211117122013-2067 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 12:22:02.361991   12299 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117122013-2067
	W1117 12:22:02.460697   12299 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117122013-2067 returned with exit code 1
	W1117 12:22:02.460744   12299 network_create.go:98] failed to create docker network pause-20211117122013-2067 192.168.67.0/24, will retry: subnet is taken
	I1117 12:22:02.460953   12299 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007a6268] amended:true}} dirty:map[192.168.49.0:0xc0007a6268 192.168.58.0:0xc00000ef18 192.168.67.0:0xc0004ac168] misses:2}
	I1117 12:22:02.460971   12299 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:22:02.461151   12299 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007a6268] amended:true}} dirty:map[192.168.49.0:0xc0007a6268 192.168.58.0:0xc00000ef18 192.168.67.0:0xc0004ac168 192.168.76.0:0xc0001163a0] misses:2}
	I1117 12:22:02.461167   12299 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:22:02.461174   12299 network_create.go:106] attempt to create docker network pause-20211117122013-2067 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1117 12:22:02.461268   12299 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117122013-2067
	I1117 12:22:08.208214   12299 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20211117122013-2067: (5.74693534s)
	I1117 12:22:08.208236   12299 network_create.go:90] docker network pause-20211117122013-2067 192.168.76.0/24 created
	I1117 12:22:08.208246   12299 kic.go:106] calculated static IP "192.168.76.2" for the "pause-20211117122013-2067" container
	I1117 12:22:08.208361   12299 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:22:08.327667   12299 cli_runner.go:115] Run: docker volume create pause-20211117122013-2067 --label name.minikube.sigs.k8s.io=pause-20211117122013-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:22:08.427411   12299 oci.go:102] Successfully created a docker volume pause-20211117122013-2067
	I1117 12:22:08.427531   12299 cli_runner.go:115] Run: docker run --rm --name pause-20211117122013-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-20211117122013-2067 --entrypoint /usr/bin/test -v pause-20211117122013-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:22:08.840167   12299 oci.go:106] Successfully prepared a docker volume pause-20211117122013-2067
	E1117 12:22:08.840216   12299 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:22:08.840224   12299 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:22:08.840228   12299 client.go:171] LocalClient.Create took 6.7852499s
	I1117 12:22:08.840243   12299 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:22:08.840358   12299 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v pause-20211117122013-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:22:10.841493   12299 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:22:10.841664   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:22:10.981873   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:22:10.982002   12299 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:22:11.182707   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:22:11.313573   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:22:11.313669   12299 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:22:11.621752   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:22:11.738945   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:22:11.739034   12299 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:22:12.449550   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:22:12.565262   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	W1117 12:22:12.565364   12299 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	
	W1117 12:22:12.565382   12299 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:22:12.565400   12299 start.go:129] duration metric: createHost completed in 10.537834965s
	I1117 12:22:12.565486   12299 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:22:12.565629   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:22:12.684915   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:22:12.685001   12299 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:22:13.026951   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:22:13.154074   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:22:13.154171   12299 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:22:13.604977   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:22:13.725268   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	I1117 12:22:13.725356   12299 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:22:14.305962   12299 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067
	W1117 12:22:14.426036   12299 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067 returned with exit code 1
	W1117 12:22:14.426117   12299 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	
	W1117 12:22:14.426129   12299 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "pause-20211117122013-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20211117122013-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	I1117 12:22:14.426139   12299 fix.go:57] fixHost completed within 34.269863759s
	I1117 12:22:14.426147   12299 start.go:80] releasing machines lock for "pause-20211117122013-2067", held for 34.269915002s
	W1117 12:22:14.426291   12299 out.go:241] * Failed to start docker container. Running "minikube delete -p pause-20211117122013-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p pause-20211117122013-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:22:14.499740   12299 out.go:176] 
	W1117 12:22:14.499933   12299 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:22:14.499954   12299 out.go:241] * 
	* 
	W1117 12:22:14.501063   12299 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:22:14.621819   12299 out.go:176] 

                                                
                                                
** /stderr **
pause_test.go:92: failed to second start a running minikube with args: "out/minikube-darwin-amd64 start -p pause-20211117122013-2067 --alsologtostderr -v=1 --driver=docker " : exit status 80
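
Note on the failure above: before the GUEST_PROVISION exit, the stderr shows minikube stepping through candidate private /24 subnets. 192.168.49.0 and 192.168.58.0 are skipped as already reserved, Docker rejects 192.168.67.0 because the subnet is taken, and 192.168.76.0 is finally accepted, after which node creation still aborts on "Unable to locate kernel modules". The sketch below illustrates only the subnet-retry idea; it is not minikube's network_create.go, and the candidate list and helper name are invented for the example.

    // Illustrative sketch only: try successive 192.168.x.0/24 candidates until
    // `docker network create` accepts one, mirroring the 49 -> 58 -> 67 -> 76
    // progression in the log above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func createFreeNetwork(name string) (string, error) {
        for _, octet := range []int{49, 58, 67, 76, 85, 94} { // hypothetical candidate list
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            gateway := fmt.Sprintf("192.168.%d.1", octet)
            err := exec.Command("docker", "network", "create", "--driver=bridge",
                "--subnet="+subnet, "--gateway="+gateway,
                "-o", "com.docker.network.driver.mtu=1500", name).Run()
            if err == nil {
                return subnet, nil // created on this subnet
            }
            // non-zero exit (subnet taken or other conflict): try the next candidate
        }
        return "", fmt.Errorf("no free private subnet found for %s", name)
    }

    func main() {
        fmt.Println(createFreeNetwork("example-net"))
    }
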
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117122013-2067
helpers_test.go:235: (dbg) docker inspect pause-20211117122013-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "pause-20211117122013-2067",
	        "Id": "53ce51392600321834b921a7fd812deb2794d0587d1318bb0f48e5ca48bcafd6",
	        "Created": "2021-11-17T20:22:02.56772705Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067: exit status 7 (158.34178ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:22:15.069781   12952 status.go:247] status error: host: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117122013-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (74.74s)
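
The retry.go lines in this test's stderr come from looking up which host port Docker published for the node's 22/tcp while the container does not exist yet. Below is a hedged sketch of that lookup, using the container name from the log; the backoff values and function name are invented for illustration and this is not minikube's code.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // sshHostPort asks Docker for the host port mapped to 22/tcp, retrying with a
    // doubling delay, and fails the same way the log does when the container is gone.
    func sshHostPort(container string) (string, error) {
        const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        var lastErr error
        for attempt, delay := 0, 200*time.Millisecond; attempt < 4; attempt, delay = attempt+1, delay*2 {
            out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            lastErr = err
            time.Sleep(delay)
        }
        return "", fmt.Errorf("get port 22 for %q: %w", container, lastErr)
    }

    func main() {
        fmt.Println(sshHostPort("pause-20211117122013-2067"))
    }
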

                                                
                                    
TestNoKubernetes/serial/Stop (14.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20211117122048-2067
no_kubernetes_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p NoKubernetes-20211117122048-2067: exit status 82 (14.728096996s)

                                                
                                                
-- stdout --
	* Stopping node "NoKubernetes-20211117122048-2067"  ...
	* Stopping node "NoKubernetes-20211117122048-2067"  ...
	* Stopping node "NoKubernetes-20211117122048-2067"  ...
	* Stopping node "NoKubernetes-20211117122048-2067"  ...
	* Stopping node "NoKubernetes-20211117122048-2067"  ...
	* Stopping node "NoKubernetes-20211117122048-2067"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect NoKubernetes-20211117122048-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117122048-2067
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:102: Failed to stop minikube "out/minikube-darwin-amd64 stop -p NoKubernetes-20211117122048-2067" : exit status 82
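
For context on exit status 82: the stop path keeps probing the container with docker container inspect --format {{.State.Status}}, and every probe fails with "No such container" until the GUEST_STOP_TIMEOUT guard gives up. Below is a small illustrative sketch of classifying that state the way the post-mortem status output reports it ("Nonexistent"); the function is hypothetical, not minikube's.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState maps a missing container to "Nonexistent" instead of treating
    // the inspect failure as a hard error, matching the status output in this report.
    func containerState(name string) string {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").CombinedOutput()
        if err != nil {
            if strings.Contains(string(out), "No such container") {
                return "Nonexistent"
            }
            return "Unknown"
        }
        return strings.TrimSpace(string(out)) // e.g. "running", "exited", "paused"
    }

    func main() {
        fmt.Println(containerState("NoKubernetes-20211117122048-2067"))
    }
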
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117122048-2067
helpers_test.go:235: (dbg) docker inspect NoKubernetes-20211117122048-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-20211117122048-2067",
	        "Id": "8c0256a86feeb2aa81466c6b046cb8d9b5751458476eaad03e06395b4bea9011",
	        "Created": "2021-11-17T20:21:36.165972027Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117122048-2067 -n NoKubernetes-20211117122048-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117122048-2067 -n NoKubernetes-20211117122048-2067: exit status 7 (142.431373ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:22:03.276699   12818 status.go:247] status error: host: state: unknown state "NoKubernetes-20211117122048-2067": docker container inspect NoKubernetes-20211117122048-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117122048-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117122048-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Stop (14.97s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (76.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20211117122048-2067 --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20211117122048-2067 --driver=docker : exit status 80 (1m16.017437788s)

                                                
                                                
-- stdout --
	* [NoKubernetes-20211117122048-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes NoKubernetes-20211117122048-2067 in cluster NoKubernetes-20211117122048-2067
	* Pulling base image ...
	* docker "NoKubernetes-20211117122048-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	* docker "NoKubernetes-20211117122048-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:22:32.692821   12823 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 12:23:13.718671   12823 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-20211117122048-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:135: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-20211117122048-2067 --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartNoArgs]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117122048-2067
helpers_test.go:235: (dbg) docker inspect NoKubernetes-20211117122048-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-20211117122048-2067",
	        "Id": "da5a7191ee0b7cf6d7605ff32bfd6547f6bcf2f413df0b38614e85d0367b2d52",
	        "Created": "2021-11-17T20:23:05.747750041Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117122048-2067 -n NoKubernetes-20211117122048-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117122048-2067 -n NoKubernetes-20211117122048-2067: exit status 7 (174.1755ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:23:19.598320   13554 status.go:247] status error: host: state: unknown state "NoKubernetes-20211117122048-2067": docker container inspect NoKubernetes-20211117122048-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117122048-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117122048-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (76.32s)
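
Each failed test above ends with the same post-mortem pattern: dump docker inspect for the profile, then ask minikube status --format={{.Host}} and skip log retrieval when the host is not running. Below is a hedged sketch of that pattern as a standalone helper; the names are invented and helpers_test.go itself is not reproduced here.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // postMortem mirrors the sequence visible in the report: docker inspect first,
    // then a host-state query; the status command exits non-zero for a missing host
    // but still prints the state on stdout.
    func postMortem(profile string) {
        inspect, _ := exec.Command("docker", "inspect", profile).CombinedOutput()
        fmt.Printf("======> post-mortem[%s]: docker inspect <======\n%s", profile, inspect)

        out, _ := exec.Command("out/minikube-darwin-amd64", "status",
            "--format={{.Host}}", "-p", profile, "-n", profile).Output()
        host := strings.TrimSpace(string(out))
        if host != "Running" {
            fmt.Printf("%q host is not running, skipping log retrieval (state=%q)\n", profile, host)
            return
        }
        // a running host would be asked for `minikube logs` at this point
    }

    func main() {
        postMortem("NoKubernetes-20211117122048-2067")
    }
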

                                                
                                    
TestPause/serial/Pause (0.84s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20211117122013-2067 --alsologtostderr -v=5
pause_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p pause-20211117122013-2067 --alsologtostderr -v=5: exit status 80 (340.603751ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:22:15.111423   12957 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:22:15.112987   12957 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:22:15.112992   12957 out.go:310] Setting ErrFile to fd 2...
	I1117 12:22:15.112995   12957 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:22:15.113075   12957 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:22:15.113240   12957 out.go:304] Setting JSON to false
	I1117 12:22:15.113256   12957 mustload.go:65] Loading cluster: pause-20211117122013-2067
	I1117 12:22:15.113476   12957 config.go:176] Loaded profile config "pause-20211117122013-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:22:15.113841   12957 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:22:15.215304   12957 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:15.381722   12957 out.go:176] 
	W1117 12:22:15.381965   12957 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	
	W1117 12:22:15.381982   12957 out.go:241] * 
	* 
	W1117 12:22:15.384830   12957 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:22:15.409963   12957 out.go:176] 

                                                
                                                
** /stderr **
pause_test.go:110: failed to pause minikube with args: "out/minikube-darwin-amd64 pause -p pause-20211117122013-2067 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117122013-2067
helpers_test.go:235: (dbg) docker inspect pause-20211117122013-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "pause-20211117122013-2067",
	        "Id": "53ce51392600321834b921a7fd812deb2794d0587d1318bb0f48e5ca48bcafd6",
	        "Created": "2021-11-17T20:22:02.56772705Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067: exit status 7 (145.359651ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:22:15.659777   12966 status.go:247] status error: host: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117122013-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117122013-2067
helpers_test.go:235: (dbg) docker inspect pause-20211117122013-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "pause-20211117122013-2067",
	        "Id": "53ce51392600321834b921a7fd812deb2794d0587d1318bb0f48e5ca48bcafd6",
	        "Created": "2021-11-17T20:22:02.56772705Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067: exit status 7 (145.493835ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:22:15.911712   12975 status.go:247] status error: host: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117122013-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Pause (0.84s)

                                                
                                    
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20211117122013-2067 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20211117122013-2067 --output=json --layout=cluster: exit status 7 (143.909564ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20211117122013-2067","StatusCode":100,"StatusName":"Starting","Step":"Creating Container","StepDetail":"* Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":520,"StatusName":"Unknown"}},"Nodes":[{"Name":"pause-20211117122013-2067","StatusCode":520,"StatusName":"Unknown","Components":{"apiserver":{"Name":"apiserver","StatusCode":520,"StatusName":"Unknown"},"kubelet":{"Name":"kubelet","StatusCode":520,"StatusName":"Unknown"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:22:16.055443   12980 status.go:258] status error: host: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	E1117 12:22:16.055451   12980 status.go:261] The "pause-20211117122013-2067" host does not exist!
	E1117 12:22:16.055738   12980 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E1117 12:22:16.055764   12980 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E1117 12:22:16.055770   12980 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E1117 12:22:16.055779   12980 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E1117 12:22:16.055791   12980 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax

                                                
                                                
** /stderr **
pause_test.go:198: incorrect status code: 100
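
pause_test.go:198 rejects the StatusCode of 100 ("Starting") that the cluster-layout JSON above reports while every node and component sits at 520 ("Unknown"). Below is a minimal sketch of reading that payload, assuming only the fields visible in the log; the struct is defined here for illustration and is not minikube's own type.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type clusterStatus struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
        Nodes      []struct {
            Name       string `json:"Name"`
            StatusCode int    `json:"StatusCode"`
            StatusName string `json:"StatusName"`
        } `json:"Nodes"`
    }

    func main() {
        // The command exits non-zero in this scenario (status 7) but still writes the
        // JSON document to stdout, so the captured output remains parseable.
        out, _ := exec.Command("out/minikube-darwin-amd64", "status",
            "-p", "pause-20211117122013-2067", "--output=json", "--layout=cluster").Output()
        var st clusterStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("unmarshal:", err)
            return
        }
        fmt.Printf("cluster: %d (%s)\n", st.StatusCode, st.StatusName)
        for _, n := range st.Nodes {
            fmt.Printf("node %s: %d (%s)\n", n.Name, n.StatusCode, n.StatusName)
        }
    }
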
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117122013-2067
helpers_test.go:235: (dbg) docker inspect pause-20211117122013-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "pause-20211117122013-2067",
	        "Id": "53ce51392600321834b921a7fd812deb2794d0587d1318bb0f48e5ca48bcafd6",
	        "Created": "2021-11-17T20:22:02.56772705Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067: exit status 7 (143.516209ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:22:16.306682   12989 status.go:247] status error: host: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117122013-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/VerifyStatus (0.39s)

                                                
                                    
TestPause/serial/Unpause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:119: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-20211117122013-2067 --alsologtostderr -v=5
pause_test.go:119: (dbg) Non-zero exit: out/minikube-darwin-amd64 unpause -p pause-20211117122013-2067 --alsologtostderr -v=5: exit status 80 (255.638283ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:22:16.347355   12994 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:22:16.348028   12994 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:22:16.348034   12994 out.go:310] Setting ErrFile to fd 2...
	I1117 12:22:16.348038   12994 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:22:16.348120   12994 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:22:16.348403   12994 mustload.go:65] Loading cluster: pause-20211117122013-2067
	I1117 12:22:16.348615   12994 config.go:176] Loaded profile config "pause-20211117122013-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:22:16.348964   12994 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:22:16.453045   12994 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:16.480701   12994 out.go:176] 
	W1117 12:22:16.480937   12994 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	
	W1117 12:22:16.480961   12994 out.go:241] * 
	* 
	W1117 12:22:16.484497   12994 out.go:241] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:22:16.561650   12994 out.go:176] 

                                                
                                                
** /stderr **
pause_test.go:121: failed to unpause minikube with args: "out/minikube-darwin-amd64 unpause -p pause-20211117122013-2067 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Unpause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117122013-2067
helpers_test.go:235: (dbg) docker inspect pause-20211117122013-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "pause-20211117122013-2067",
	        "Id": "53ce51392600321834b921a7fd812deb2794d0587d1318bb0f48e5ca48bcafd6",
	        "Created": "2021-11-17T20:22:02.56772705Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067: exit status 7 (146.29785ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:22:16.818724   13003 status.go:247] status error: host: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117122013-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Unpause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117122013-2067
helpers_test.go:235: (dbg) docker inspect pause-20211117122013-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "pause-20211117122013-2067",
	        "Id": "53ce51392600321834b921a7fd812deb2794d0587d1318bb0f48e5ca48bcafd6",
	        "Created": "2021-11-17T20:22:02.56772705Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067: exit status 7 (145.973036ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:22:17.079320   13012 status.go:247] status error: host: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117122013-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Unpause (0.78s)

                                                
                                    
TestPause/serial/PauseAgain (0.71s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20211117122013-2067 --alsologtostderr -v=5
pause_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p pause-20211117122013-2067 --alsologtostderr -v=5: exit status 80 (198.875362ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:22:17.123507   13017 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:22:17.123649   13017 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:22:17.123653   13017 out.go:310] Setting ErrFile to fd 2...
	I1117 12:22:17.123664   13017 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:22:17.123747   13017 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:22:17.123924   13017 out.go:304] Setting JSON to false
	I1117 12:22:17.123940   13017 mustload.go:65] Loading cluster: pause-20211117122013-2067
	I1117 12:22:17.124188   13017 config.go:176] Loaded profile config "pause-20211117122013-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:22:17.124539   13017 cli_runner.go:115] Run: docker container inspect pause-20211117122013-2067 --format={{.State.Status}}
	W1117 12:22:17.227975   13017 cli_runner.go:162] docker container inspect pause-20211117122013-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:22:17.255436   13017 out.go:176] 
	W1117 12:22:17.255623   13017 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067
	
	W1117 12:22:17.255637   13017 out.go:241] * 
	* 
	W1117 12:22:17.258969   13017 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:22:17.280072   13017 out.go:176] 

                                                
                                                
** /stderr **
pause_test.go:110: failed to pause minikube with args: "out/minikube-darwin-amd64 pause -p pause-20211117122013-2067 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117122013-2067
helpers_test.go:235: (dbg) docker inspect pause-20211117122013-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "pause-20211117122013-2067",
	        "Id": "53ce51392600321834b921a7fd812deb2794d0587d1318bb0f48e5ca48bcafd6",
	        "Created": "2021-11-17T20:22:02.56772705Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067: exit status 7 (143.827046ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:22:17.535317   13026 status.go:247] status error: host: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117122013-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117122013-2067
helpers_test.go:235: (dbg) docker inspect pause-20211117122013-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "pause-20211117122013-2067",
	        "Id": "53ce51392600321834b921a7fd812deb2794d0587d1318bb0f48e5ca48bcafd6",
	        "Created": "2021-11-17T20:22:02.56772705Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117122013-2067 -n pause-20211117122013-2067: exit status 7 (144.501491ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:22:17.791504   13035 status.go:247] status error: host: state: unknown state "pause-20211117122013-2067": docker container inspect pause-20211117122013-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-20211117122013-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-20211117122013-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/PauseAgain (0.71s)
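
PauseAgain hits the same underlying condition, surfaced through `minikube pause` rather than `status`: once the container lookup fails, pause aborts with GUEST_STATUS and exit code 80. A quick way to confirm the exit code by hand, with the arguments taken from pause_test.go:108 above:

	$ out/minikube-darwin-amd64 pause -p pause-20211117122013-2067 --alsologtostderr -v=5; echo "exit: $?"
	exit: 80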

                                                
                                    
TestNetworkPlugins/group/auto/Start (47.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20211117121607-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p auto-20211117121607-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : exit status 80 (47.647987759s)

                                                
                                                
-- stdout --
	* [auto-20211117121607-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node auto-20211117121607-2067 in cluster auto-20211117121607-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "auto-20211117121607-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:26:30.967954   14833 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:26:30.968091   14833 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:26:30.968096   14833 out.go:310] Setting ErrFile to fd 2...
	I1117 12:26:30.968099   14833 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:26:30.968179   14833 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:26:30.968498   14833 out.go:304] Setting JSON to false
	I1117 12:26:30.992711   14833 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3365,"bootTime":1637177425,"procs":321,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:26:30.992809   14833 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:26:31.020137   14833 out.go:176] * [auto-20211117121607-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:26:31.075106   14833 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:26:31.020268   14833 notify.go:174] Checking for updates...
	I1117 12:26:31.100298   14833 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:26:31.126468   14833 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:26:31.152598   14833 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:26:31.153434   14833 config.go:176] Loaded profile config "cert-expiration-20211117122341-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:26:31.153593   14833 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:26:31.153655   14833 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:26:31.247184   14833 docker.go:132] docker version: linux-20.10.5
	I1117 12:26:31.247366   14833 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:26:31.403125   14833 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:26:31.35796554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:26:31.451264   14833 out.go:176] * Using the docker driver based on user configuration
	I1117 12:26:31.451304   14833 start.go:280] selected driver: docker
	I1117 12:26:31.451321   14833 start.go:775] validating driver "docker" against <nil>
	I1117 12:26:31.451342   14833 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:26:31.454769   14833 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:26:31.608672   14833 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:26:31.5647403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://inde
x.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:26:31.608779   14833 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:26:31.608888   14833 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:26:31.608903   14833 cni.go:93] Creating CNI manager for ""
	I1117 12:26:31.608915   14833 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:26:31.608920   14833 start_flags.go:282] config:
	{Name:auto-20211117121607-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:auto-20211117121607-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkP
lugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:26:31.657479   14833 out.go:176] * Starting control plane node auto-20211117121607-2067 in cluster auto-20211117121607-2067
	I1117 12:26:31.657611   14833 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:26:31.683404   14833 out.go:176] * Pulling base image ...
	I1117 12:26:31.683546   14833 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:26:31.683546   14833 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:26:31.683663   14833 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:26:31.683692   14833 cache.go:57] Caching tarball of preloaded images
	I1117 12:26:31.683894   14833 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:26:31.684502   14833 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:26:31.685009   14833 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/auto-20211117121607-2067/config.json ...
	I1117 12:26:31.685104   14833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/auto-20211117121607-2067/config.json: {Name:mkc95dac722739ece277755717e05b9b6e001971 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:26:31.802510   14833 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:26:31.802532   14833 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:26:31.802544   14833 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:26:31.802581   14833 start.go:313] acquiring machines lock for auto-20211117121607-2067: {Name:mkc8fe56948bd8f628a62b3d5824524a2fa61597 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:26:31.802715   14833 start.go:317] acquired machines lock for "auto-20211117121607-2067" in 122.972µs
	I1117 12:26:31.802742   14833 start.go:89] Provisioning new machine with config: &{Name:auto-20211117121607-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:auto-20211117121607-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:26:31.802796   14833 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:26:31.829597   14833 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:26:31.829919   14833 start.go:160] libmachine.API.Create for "auto-20211117121607-2067" (driver="docker")
	I1117 12:26:31.829971   14833 client.go:168] LocalClient.Create starting
	I1117 12:26:31.830212   14833 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:26:31.851332   14833 main.go:130] libmachine: Decoding PEM data...
	I1117 12:26:31.851385   14833 main.go:130] libmachine: Parsing certificate...
	I1117 12:26:31.851527   14833 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:26:31.851603   14833 main.go:130] libmachine: Decoding PEM data...
	I1117 12:26:31.851621   14833 main.go:130] libmachine: Parsing certificate...
	I1117 12:26:31.852664   14833 cli_runner.go:115] Run: docker network inspect auto-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:26:31.954783   14833 cli_runner.go:162] docker network inspect auto-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:26:31.954892   14833 network_create.go:254] running [docker network inspect auto-20211117121607-2067] to gather additional debugging logs...
	I1117 12:26:31.954911   14833 cli_runner.go:115] Run: docker network inspect auto-20211117121607-2067
	W1117 12:26:32.059473   14833 cli_runner.go:162] docker network inspect auto-20211117121607-2067 returned with exit code 1
	I1117 12:26:32.059500   14833 network_create.go:257] error running [docker network inspect auto-20211117121607-2067]: docker network inspect auto-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20211117121607-2067
	I1117 12:26:32.059519   14833 network_create.go:259] output of [docker network inspect auto-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20211117121607-2067
	
	** /stderr **
	I1117 12:26:32.059634   14833 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:26:32.163172   14833 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000692070] misses:0}
	I1117 12:26:32.163208   14833 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:26:32.163225   14833 network_create.go:106] attempt to create docker network auto-20211117121607-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:26:32.163303   14833 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20211117121607-2067
	I1117 12:26:37.008516   14833 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20211117121607-2067: (4.845189228s)
	I1117 12:26:37.008540   14833 network_create.go:90] docker network auto-20211117121607-2067 192.168.49.0/24 created
	I1117 12:26:37.008557   14833 kic.go:106] calculated static IP "192.168.49.2" for the "auto-20211117121607-2067" container
	I1117 12:26:37.008668   14833 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:26:37.110284   14833 cli_runner.go:115] Run: docker volume create auto-20211117121607-2067 --label name.minikube.sigs.k8s.io=auto-20211117121607-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:26:37.212134   14833 oci.go:102] Successfully created a docker volume auto-20211117121607-2067
	I1117 12:26:37.212300   14833 cli_runner.go:115] Run: docker run --rm --name auto-20211117121607-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20211117121607-2067 --entrypoint /usr/bin/test -v auto-20211117121607-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:26:37.692498   14833 oci.go:106] Successfully prepared a docker volume auto-20211117121607-2067
	E1117 12:26:37.692552   14833 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:26:37.692559   14833 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:26:37.692573   14833 client.go:171] LocalClient.Create took 5.862628649s
	I1117 12:26:37.692584   14833 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:26:37.692692   14833 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:26:39.702476   14833 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:26:39.702598   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:26:39.826664   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	I1117 12:26:39.826777   14833 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:40.103414   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:26:40.224622   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	I1117 12:26:40.224705   14833 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:40.773009   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:26:40.889333   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	I1117 12:26:40.889417   14833 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:41.552555   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:26:41.655547   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	W1117 12:26:41.655623   14833 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	
	W1117 12:26:41.655638   14833 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:41.655647   14833 start.go:129] duration metric: createHost completed in 9.852908894s
	I1117 12:26:41.655654   14833 start.go:80] releasing machines lock for "auto-20211117121607-2067", held for 9.852992957s
	W1117 12:26:41.655669   14833 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:26:41.656137   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:41.756188   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:26:41.756231   14833 delete.go:82] Unable to get host status for auto-20211117121607-2067, assuming it has already been deleted: state: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	W1117 12:26:41.756476   14833 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:26:41.756498   14833 start.go:547] Will try again in 5 seconds ...
	I1117 12:26:43.758863   14833 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.066178244s)
	I1117 12:26:43.758895   14833 kic.go:188] duration metric: took 6.066350 seconds to extract preloaded images to volume
	I1117 12:26:46.756686   14833 start.go:313] acquiring machines lock for auto-20211117121607-2067: {Name:mkc8fe56948bd8f628a62b3d5824524a2fa61597 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:26:46.756870   14833 start.go:317] acquired machines lock for "auto-20211117121607-2067" in 152.613µs
	I1117 12:26:46.756924   14833 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:26:46.756936   14833 fix.go:55] fixHost starting: 
	I1117 12:26:46.757424   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:46.864160   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:26:46.864207   14833 fix.go:108] recreateIfNeeded on auto-20211117121607-2067: state= err=unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:46.864224   14833 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:26:46.891181   14833 out.go:176] * docker "auto-20211117121607-2067" container is missing, will recreate.
	I1117 12:26:46.891236   14833 delete.go:124] DEMOLISHING auto-20211117121607-2067 ...
	I1117 12:26:46.891465   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:46.995239   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:26:46.995280   14833 stop.go:75] unable to get state: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:46.995292   14833 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:46.995720   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:47.098815   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:26:47.098859   14833 delete.go:82] Unable to get host status for auto-20211117121607-2067, assuming it has already been deleted: state: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:47.098938   14833 cli_runner.go:115] Run: docker container inspect -f {{.Id}} auto-20211117121607-2067
	W1117 12:26:47.202547   14833 cli_runner.go:162] docker container inspect -f {{.Id}} auto-20211117121607-2067 returned with exit code 1
	I1117 12:26:47.202584   14833 kic.go:360] could not find the container auto-20211117121607-2067 to remove it. will try anyways
	I1117 12:26:47.202673   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:47.304355   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:26:47.304395   14833 oci.go:83] error getting container status, will try to delete anyways: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:47.304488   14833 cli_runner.go:115] Run: docker exec --privileged -t auto-20211117121607-2067 /bin/bash -c "sudo init 0"
	W1117 12:26:47.407970   14833 cli_runner.go:162] docker exec --privileged -t auto-20211117121607-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:26:47.407999   14833 oci.go:656] error shutdown auto-20211117121607-2067: docker exec --privileged -t auto-20211117121607-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:48.414890   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:48.517448   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:26:48.517499   14833 oci.go:668] temporary error verifying shutdown: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:48.517507   14833 oci.go:670] temporary error: container auto-20211117121607-2067 status is  but expect it to be exited
	I1117 12:26:48.517527   14833 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:48.985985   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:49.092571   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:26:49.092609   14833 oci.go:668] temporary error verifying shutdown: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:49.092625   14833 oci.go:670] temporary error: container auto-20211117121607-2067 status is  but expect it to be exited
	I1117 12:26:49.092643   14833 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:49.985959   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:50.091439   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:26:50.091484   14833 oci.go:668] temporary error verifying shutdown: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:50.091504   14833 oci.go:670] temporary error: container auto-20211117121607-2067 status is  but expect it to be exited
	I1117 12:26:50.091528   14833 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:50.736020   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:50.845493   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:26:50.845532   14833 oci.go:668] temporary error verifying shutdown: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:50.845541   14833 oci.go:670] temporary error: container auto-20211117121607-2067 status is  but expect it to be exited
	I1117 12:26:50.845563   14833 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:51.955768   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:52.064692   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:26:52.064734   14833 oci.go:668] temporary error verifying shutdown: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:52.064743   14833 oci.go:670] temporary error: container auto-20211117121607-2067 status is  but expect it to be exited
	I1117 12:26:52.064764   14833 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:53.584602   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:53.687237   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:26:53.687276   14833 oci.go:668] temporary error verifying shutdown: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:53.687285   14833 oci.go:670] temporary error: container auto-20211117121607-2067 status is  but expect it to be exited
	I1117 12:26:53.687303   14833 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:56.736475   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:26:56.841276   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:26:56.841331   14833 oci.go:668] temporary error verifying shutdown: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:26:56.841343   14833 oci.go:670] temporary error: container auto-20211117121607-2067 status is  but expect it to be exited
	I1117 12:26:56.841366   14833 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:27:02.632225   14833 cli_runner.go:115] Run: docker container inspect auto-20211117121607-2067 --format={{.State.Status}}
	W1117 12:27:02.735551   14833 cli_runner.go:162] docker container inspect auto-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:02.735596   14833 oci.go:668] temporary error verifying shutdown: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:27:02.735604   14833 oci.go:670] temporary error: container auto-20211117121607-2067 status is  but expect it to be exited
	I1117 12:27:02.735629   14833 oci.go:87] couldn't shut down auto-20211117121607-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "auto-20211117121607-2067": docker container inspect auto-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	 
	I1117 12:27:02.735724   14833 cli_runner.go:115] Run: docker rm -f -v auto-20211117121607-2067
	I1117 12:27:02.835591   14833 cli_runner.go:115] Run: docker container inspect -f {{.Id}} auto-20211117121607-2067
	W1117 12:27:02.936882   14833 cli_runner.go:162] docker container inspect -f {{.Id}} auto-20211117121607-2067 returned with exit code 1
	I1117 12:27:02.936998   14833 cli_runner.go:115] Run: docker network inspect auto-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:27:03.036070   14833 cli_runner.go:115] Run: docker network rm auto-20211117121607-2067
	I1117 12:27:06.467425   14833 cli_runner.go:168] Completed: docker network rm auto-20211117121607-2067: (3.431337483s)
	W1117 12:27:06.467698   14833 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:27:06.467705   14833 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:27:07.468017   14833 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:27:07.495561   14833 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:27:07.495802   14833 start.go:160] libmachine.API.Create for "auto-20211117121607-2067" (driver="docker")
	I1117 12:27:07.495846   14833 client.go:168] LocalClient.Create starting
	I1117 12:27:07.496066   14833 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:27:07.496149   14833 main.go:130] libmachine: Decoding PEM data...
	I1117 12:27:07.496179   14833 main.go:130] libmachine: Parsing certificate...
	I1117 12:27:07.496274   14833 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:27:07.496330   14833 main.go:130] libmachine: Decoding PEM data...
	I1117 12:27:07.496350   14833 main.go:130] libmachine: Parsing certificate...
	I1117 12:27:07.497164   14833 cli_runner.go:115] Run: docker network inspect auto-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:27:07.601886   14833 cli_runner.go:162] docker network inspect auto-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:27:07.601990   14833 network_create.go:254] running [docker network inspect auto-20211117121607-2067] to gather additional debugging logs...
	I1117 12:27:07.602006   14833 cli_runner.go:115] Run: docker network inspect auto-20211117121607-2067
	W1117 12:27:07.701400   14833 cli_runner.go:162] docker network inspect auto-20211117121607-2067 returned with exit code 1
	I1117 12:27:07.701424   14833 network_create.go:257] error running [docker network inspect auto-20211117121607-2067]: docker network inspect auto-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20211117121607-2067
	I1117 12:27:07.701439   14833 network_create.go:259] output of [docker network inspect auto-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20211117121607-2067
	
	** /stderr **
	I1117 12:27:07.701522   14833 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:27:07.801036   14833 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000692070] amended:false}} dirty:map[] misses:0}
	I1117 12:27:07.801067   14833 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:27:07.801245   14833 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000692070] amended:true}} dirty:map[192.168.49.0:0xc000692070 192.168.58.0:0xc0001125b8] misses:0}
	I1117 12:27:07.801256   14833 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:27:07.801263   14833 network_create.go:106] attempt to create docker network auto-20211117121607-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:27:07.801339   14833 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20211117121607-2067
	I1117 12:27:12.568413   14833 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20211117121607-2067: (4.767057771s)
	I1117 12:27:12.568434   14833 network_create.go:90] docker network auto-20211117121607-2067 192.168.58.0/24 created
	I1117 12:27:12.568444   14833 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20211117121607-2067" container
	I1117 12:27:12.568561   14833 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:27:12.667647   14833 cli_runner.go:115] Run: docker volume create auto-20211117121607-2067 --label name.minikube.sigs.k8s.io=auto-20211117121607-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:27:12.769235   14833 oci.go:102] Successfully created a docker volume auto-20211117121607-2067
	I1117 12:27:12.769352   14833 cli_runner.go:115] Run: docker run --rm --name auto-20211117121607-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20211117121607-2067 --entrypoint /usr/bin/test -v auto-20211117121607-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:27:13.163410   14833 oci.go:106] Successfully prepared a docker volume auto-20211117121607-2067
	E1117 12:27:13.163456   14833 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:27:13.163467   14833 client.go:171] LocalClient.Create took 5.667650128s
	I1117 12:27:13.163477   14833 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:27:13.163495   14833 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:27:13.163600   14833 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:27:15.164641   14833 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:27:15.164739   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:27:15.309886   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	I1117 12:27:15.310044   14833 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:27:15.493931   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:27:15.612204   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	I1117 12:27:15.612295   14833 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:27:15.942751   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:27:16.082164   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	I1117 12:27:16.082243   14833 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:27:16.551718   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:27:16.669736   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	W1117 12:27:16.669830   14833 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	
	W1117 12:27:16.669844   14833 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:27:16.669855   14833 start.go:129] duration metric: createHost completed in 9.201873134s
	I1117 12:27:16.669928   14833 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:27:16.669987   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:27:16.791364   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	I1117 12:27:16.791442   14833 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:27:16.988719   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:27:17.116736   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	I1117 12:27:17.116812   14833 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:27:17.421110   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:27:17.549725   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	I1117 12:27:17.549831   14833 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:27:18.222334   14833 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067
	W1117 12:27:18.334573   14833 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067 returned with exit code 1
	W1117 12:27:18.334661   14833 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	
	W1117 12:27:18.334677   14833 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-20211117121607-2067
	I1117 12:27:18.334687   14833 fix.go:57] fixHost completed within 31.577948862s
	I1117 12:27:18.334695   14833 start.go:80] releasing machines lock for "auto-20211117121607-2067", held for 31.578008416s
	W1117 12:27:18.334852   14833 out.go:241] * Failed to start docker container. Running "minikube delete -p auto-20211117121607-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p auto-20211117121607-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:27:18.394435   14833 out.go:176] 
	W1117 12:27:18.394560   14833 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:27:18.394570   14833 out.go:241] * 
	* 
	W1117 12:27:18.395137   14833 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:27:18.557087   14833 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (47.65s)
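The auto profile never gets past host creation: LocalClient.Create aborts at oci.go:173 with "error getting kernel modules path: Unable to locate kernel modules", so no kic node container ever exists and every later "docker container inspect" retry can only return "No such container" until minikube gives up with GUEST_PROVISION (exit status 80). A minimal diagnostic sketch, outside the test harness and assuming the usual privileged-nsenter trick works against this Docker Desktop install, is to look for a kernel-modules directory inside the Docker Desktop Linux VM, since the kic driver appears to expect one on the container host:

	# Hypothetical check, not executed by the test run: enter the Docker Desktop
	# VM's namespaces and list its kernel-modules directory for the running kernel.
	# The alpine:3.14 image and busybox's nsenter applet are assumptions here.
	docker run --rm --privileged --pid=host alpine:3.14 \
	  nsenter -t 1 -m -u -n -i sh -c 'uname -r; ls /lib/modules'

If that listing is empty or missing, the failure is environmental rather than specific to the auto CNI profile, which would also explain why the false profile below fails with the identical kernel-modules error; if the directory is present, the error more likely comes from the change under test than from the host.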

TestNetworkPlugins/group/false/Start (55.14s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20211117121608-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p false-20211117121608-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : exit status 80 (55.133306224s)

-- stdout --
	* [false-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node false-20211117121608-2067 in cluster false-20211117121608-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "false-20211117121608-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:27:24.102953   15101 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:27:24.103086   15101 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:27:24.103095   15101 out.go:310] Setting ErrFile to fd 2...
	I1117 12:27:24.103099   15101 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:27:24.103172   15101 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:27:24.103481   15101 out.go:304] Setting JSON to false
	I1117 12:27:24.127740   15101 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3419,"bootTime":1637177425,"procs":320,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:27:24.127838   15101 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:27:24.155092   15101 out.go:176] * [false-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:27:24.155273   15101 notify.go:174] Checking for updates...
	I1117 12:27:24.202659   15101 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:27:24.228784   15101 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:27:24.254601   15101 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:27:24.280745   15101 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:27:24.281518   15101 config.go:176] Loaded profile config "cert-expiration-20211117122341-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:27:24.281685   15101 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:27:24.281746   15101 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:27:24.372916   15101 docker.go:132] docker version: linux-20.10.5
	I1117 12:27:24.373075   15101 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:27:24.525718   15101 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:27:24.477473571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:27:24.573181   15101 out.go:176] * Using the docker driver based on user configuration
	I1117 12:27:24.573236   15101 start.go:280] selected driver: docker
	I1117 12:27:24.573248   15101 start.go:775] validating driver "docker" against <nil>
	I1117 12:27:24.573282   15101 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:27:24.576717   15101 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:27:24.729238   15101 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:27:24.680329967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:27:24.729345   15101 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:27:24.729461   15101 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:27:24.729479   15101 cni.go:93] Creating CNI manager for "false"
	I1117 12:27:24.729486   15101 start_flags.go:282] config:
	{Name:false-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:false-20211117121608-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Networ
kPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:27:24.756535   15101 out.go:176] * Starting control plane node false-20211117121608-2067 in cluster false-20211117121608-2067
	I1117 12:27:24.756599   15101 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:27:24.783014   15101 out.go:176] * Pulling base image ...
	I1117 12:27:24.783125   15101 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:27:24.783191   15101 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:27:24.783240   15101 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:27:24.783275   15101 cache.go:57] Caching tarball of preloaded images
	I1117 12:27:24.783526   15101 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:27:24.783554   15101 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:27:24.784555   15101 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/false-20211117121608-2067/config.json ...
	I1117 12:27:24.784711   15101 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/false-20211117121608-2067/config.json: {Name:mk2a6298a1c3af7966fefc4ff147ecac498c6c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:27:24.901470   15101 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:27:24.901491   15101 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:27:24.901504   15101 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:27:24.901544   15101 start.go:313] acquiring machines lock for false-20211117121608-2067: {Name:mkde6ad0d1956e36498cc7d4893d8e0ace22abda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:27:24.901685   15101 start.go:317] acquired machines lock for "false-20211117121608-2067" in 126.447µs
	I1117 12:27:24.901712   15101 start.go:89] Provisioning new machine with config: &{Name:false-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:false-20211117121608-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:27:24.901784   15101 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:27:24.928711   15101 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:27:24.929037   15101 start.go:160] libmachine.API.Create for "false-20211117121608-2067" (driver="docker")
	I1117 12:27:24.929113   15101 client.go:168] LocalClient.Create starting
	I1117 12:27:24.929265   15101 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:27:24.950211   15101 main.go:130] libmachine: Decoding PEM data...
	I1117 12:27:24.950257   15101 main.go:130] libmachine: Parsing certificate...
	I1117 12:27:24.950394   15101 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:27:24.950469   15101 main.go:130] libmachine: Decoding PEM data...
	I1117 12:27:24.950491   15101 main.go:130] libmachine: Parsing certificate...
	I1117 12:27:24.951520   15101 cli_runner.go:115] Run: docker network inspect false-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:27:25.055052   15101 cli_runner.go:162] docker network inspect false-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:27:25.055155   15101 network_create.go:254] running [docker network inspect false-20211117121608-2067] to gather additional debugging logs...
	I1117 12:27:25.055171   15101 cli_runner.go:115] Run: docker network inspect false-20211117121608-2067
	W1117 12:27:25.156326   15101 cli_runner.go:162] docker network inspect false-20211117121608-2067 returned with exit code 1
	I1117 12:27:25.156348   15101 network_create.go:257] error running [docker network inspect false-20211117121608-2067]: docker network inspect false-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20211117121608-2067
	I1117 12:27:25.156363   15101 network_create.go:259] output of [docker network inspect false-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20211117121608-2067
	
	** /stderr **
	I1117 12:27:25.156449   15101 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:27:25.261868   15101 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005b4220] misses:0}
	I1117 12:27:25.261908   15101 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:27:25.261925   15101 network_create.go:106] attempt to create docker network false-20211117121608-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:27:25.262005   15101 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20211117121608-2067
	I1117 12:27:30.107168   15101 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20211117121608-2067: (4.845132303s)
	I1117 12:27:30.107197   15101 network_create.go:90] docker network false-20211117121608-2067 192.168.49.0/24 created
	I1117 12:27:30.107235   15101 kic.go:106] calculated static IP "192.168.49.2" for the "false-20211117121608-2067" container
	I1117 12:27:30.107351   15101 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:27:30.208749   15101 cli_runner.go:115] Run: docker volume create false-20211117121608-2067 --label name.minikube.sigs.k8s.io=false-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:27:30.310381   15101 oci.go:102] Successfully created a docker volume false-20211117121608-2067
	I1117 12:27:30.310504   15101 cli_runner.go:115] Run: docker run --rm --name false-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20211117121608-2067 --entrypoint /usr/bin/test -v false-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:27:30.780532   15101 oci.go:106] Successfully prepared a docker volume false-20211117121608-2067
	E1117 12:27:30.780600   15101 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:27:30.780602   15101 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:27:30.780624   15101 client.go:171] LocalClient.Create took 5.851537596s
	I1117 12:27:30.780632   15101 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:27:30.780753   15101 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:27:32.785496   15101 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:27:32.785601   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:27:32.909776   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	I1117 12:27:32.909867   15101 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:33.186249   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:27:33.311472   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	I1117 12:27:33.311547   15101 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:33.858804   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:27:33.975218   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	I1117 12:27:33.975295   15101 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:34.633096   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:27:34.904604   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	W1117 12:27:34.904702   15101 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	
	W1117 12:27:34.904749   15101 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:34.904766   15101 start.go:129] duration metric: createHost completed in 10.003036581s
	I1117 12:27:34.904775   15101 start.go:80] releasing machines lock for "false-20211117121608-2067", held for 10.003146467s
	W1117 12:27:34.904792   15101 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:27:34.905348   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:35.027815   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:35.027857   15101 delete.go:82] Unable to get host status for false-20211117121608-2067, assuming it has already been deleted: state: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	W1117 12:27:35.028001   15101 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:27:35.028018   15101 start.go:547] Will try again in 5 seconds ...
	I1117 12:27:36.897295   15101 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.116546595s)
	I1117 12:27:36.897317   15101 kic.go:188] duration metric: took 6.116725 seconds to extract preloaded images to volume
	I1117 12:27:40.033764   15101 start.go:313] acquiring machines lock for false-20211117121608-2067: {Name:mkde6ad0d1956e36498cc7d4893d8e0ace22abda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:27:40.033851   15101 start.go:317] acquired machines lock for "false-20211117121608-2067" in 71.977µs
	I1117 12:27:40.033873   15101 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:27:40.033881   15101 fix.go:55] fixHost starting: 
	I1117 12:27:40.034143   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:40.252140   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:40.252205   15101 fix.go:108] recreateIfNeeded on false-20211117121608-2067: state= err=unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:40.252225   15101 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:27:40.278060   15101 out.go:176] * docker "false-20211117121608-2067" container is missing, will recreate.
	I1117 12:27:40.278101   15101 delete.go:124] DEMOLISHING false-20211117121608-2067 ...
	I1117 12:27:40.278272   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:40.396414   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:27:40.396460   15101 stop.go:75] unable to get state: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:40.396492   15101 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:40.396916   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:40.507154   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:40.507196   15101 delete.go:82] Unable to get host status for false-20211117121608-2067, assuming it has already been deleted: state: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:40.507306   15101 cli_runner.go:115] Run: docker container inspect -f {{.Id}} false-20211117121608-2067
	W1117 12:27:40.629619   15101 cli_runner.go:162] docker container inspect -f {{.Id}} false-20211117121608-2067 returned with exit code 1
	I1117 12:27:40.629646   15101 kic.go:360] could not find the container false-20211117121608-2067 to remove it. will try anyways
	I1117 12:27:40.629718   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:40.741550   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:27:40.741598   15101 oci.go:83] error getting container status, will try to delete anyways: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:40.741700   15101 cli_runner.go:115] Run: docker exec --privileged -t false-20211117121608-2067 /bin/bash -c "sudo init 0"
	W1117 12:27:40.855176   15101 cli_runner.go:162] docker exec --privileged -t false-20211117121608-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:27:40.855201   15101 oci.go:656] error shutdown false-20211117121608-2067: docker exec --privileged -t false-20211117121608-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:41.862414   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:41.971858   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:41.971901   15101 oci.go:668] temporary error verifying shutdown: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:41.971909   15101 oci.go:670] temporary error: container false-20211117121608-2067 status is  but expect it to be exited
	I1117 12:27:41.971931   15101 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:42.435270   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:42.541662   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:42.541704   15101 oci.go:668] temporary error verifying shutdown: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:42.541715   15101 oci.go:670] temporary error: container false-20211117121608-2067 status is  but expect it to be exited
	I1117 12:27:42.541738   15101 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:43.435255   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:43.535319   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:43.535360   15101 oci.go:668] temporary error verifying shutdown: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:43.535371   15101 oci.go:670] temporary error: container false-20211117121608-2067 status is  but expect it to be exited
	I1117 12:27:43.535393   15101 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:44.182106   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:44.307802   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:44.307842   15101 oci.go:668] temporary error verifying shutdown: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:44.307853   15101 oci.go:670] temporary error: container false-20211117121608-2067 status is  but expect it to be exited
	I1117 12:27:44.307875   15101 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:45.425681   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:45.527758   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:45.527797   15101 oci.go:668] temporary error verifying shutdown: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:45.527805   15101 oci.go:670] temporary error: container false-20211117121608-2067 status is  but expect it to be exited
	I1117 12:27:45.527828   15101 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:47.048517   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:47.150917   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:47.150964   15101 oci.go:668] temporary error verifying shutdown: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:47.150977   15101 oci.go:670] temporary error: container false-20211117121608-2067 status is  but expect it to be exited
	I1117 12:27:47.151005   15101 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:50.201741   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:50.302411   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:50.302471   15101 oci.go:668] temporary error verifying shutdown: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:50.302481   15101 oci.go:670] temporary error: container false-20211117121608-2067 status is  but expect it to be exited
	I1117 12:27:50.302510   15101 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:56.087282   15101 cli_runner.go:115] Run: docker container inspect false-20211117121608-2067 --format={{.State.Status}}
	W1117 12:27:56.190257   15101 cli_runner.go:162] docker container inspect false-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:27:56.190296   15101 oci.go:668] temporary error verifying shutdown: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:27:56.190305   15101 oci.go:670] temporary error: container false-20211117121608-2067 status is  but expect it to be exited
	I1117 12:27:56.190332   15101 oci.go:87] couldn't shut down false-20211117121608-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "false-20211117121608-2067": docker container inspect false-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	 
	I1117 12:27:56.190419   15101 cli_runner.go:115] Run: docker rm -f -v false-20211117121608-2067
	I1117 12:27:56.291372   15101 cli_runner.go:115] Run: docker container inspect -f {{.Id}} false-20211117121608-2067
	W1117 12:27:56.392781   15101 cli_runner.go:162] docker container inspect -f {{.Id}} false-20211117121608-2067 returned with exit code 1
	I1117 12:27:56.392897   15101 cli_runner.go:115] Run: docker network inspect false-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:27:56.493164   15101 cli_runner.go:115] Run: docker network rm false-20211117121608-2067
	I1117 12:28:00.824144   15101 cli_runner.go:168] Completed: docker network rm false-20211117121608-2067: (4.330960453s)
	W1117 12:28:00.824426   15101 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:28:00.824433   15101 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:28:01.824835   15101 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:28:01.851756   15101 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:28:01.851843   15101 start.go:160] libmachine.API.Create for "false-20211117121608-2067" (driver="docker")
	I1117 12:28:01.851860   15101 client.go:168] LocalClient.Create starting
	I1117 12:28:01.851952   15101 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:28:01.851999   15101 main.go:130] libmachine: Decoding PEM data...
	I1117 12:28:01.852015   15101 main.go:130] libmachine: Parsing certificate...
	I1117 12:28:01.852093   15101 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:28:01.872167   15101 main.go:130] libmachine: Decoding PEM data...
	I1117 12:28:01.872207   15101 main.go:130] libmachine: Parsing certificate...
	I1117 12:28:01.873696   15101 cli_runner.go:115] Run: docker network inspect false-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:28:01.975049   15101 cli_runner.go:162] docker network inspect false-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:28:01.975217   15101 network_create.go:254] running [docker network inspect false-20211117121608-2067] to gather additional debugging logs...
	I1117 12:28:01.975238   15101 cli_runner.go:115] Run: docker network inspect false-20211117121608-2067
	W1117 12:28:02.075650   15101 cli_runner.go:162] docker network inspect false-20211117121608-2067 returned with exit code 1
	I1117 12:28:02.075679   15101 network_create.go:257] error running [docker network inspect false-20211117121608-2067]: docker network inspect false-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20211117121608-2067
	I1117 12:28:02.075704   15101 network_create.go:259] output of [docker network inspect false-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20211117121608-2067
	
	** /stderr **
	I1117 12:28:02.075810   15101 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:28:02.175762   15101 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4220] amended:false}} dirty:map[] misses:0}
	I1117 12:28:02.175799   15101 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:28:02.175968   15101 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4220] amended:true}} dirty:map[192.168.49.0:0xc0005b4220 192.168.58.0:0xc0005b4020] misses:0}
	I1117 12:28:02.175979   15101 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:28:02.175986   15101 network_create.go:106] attempt to create docker network false-20211117121608-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:28:02.176067   15101 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20211117121608-2067
	W1117 12:28:02.276355   15101 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20211117121608-2067 returned with exit code 1
	W1117 12:28:02.276398   15101 network_create.go:98] failed to create docker network false-20211117121608-2067 192.168.58.0/24, will retry: subnet is taken
	I1117 12:28:02.276621   15101 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4220] amended:true}} dirty:map[192.168.49.0:0xc0005b4220 192.168.58.0:0xc0005b4020] misses:1}
	I1117 12:28:02.276641   15101 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:28:02.276813   15101 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b4220] amended:true}} dirty:map[192.168.49.0:0xc0005b4220 192.168.58.0:0xc0005b4020 192.168.67.0:0xc00069a110] misses:1}
	I1117 12:28:02.276824   15101 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:28:02.276832   15101 network_create.go:106] attempt to create docker network false-20211117121608-2067 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 12:28:02.276907   15101 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20211117121608-2067
	I1117 12:28:13.230758   15101 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20211117121608-2067: (10.953865925s)
	I1117 12:28:13.230781   15101 network_create.go:90] docker network false-20211117121608-2067 192.168.67.0/24 created
	I1117 12:28:13.230797   15101 kic.go:106] calculated static IP "192.168.67.2" for the "false-20211117121608-2067" container
	I1117 12:28:13.230901   15101 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:28:13.333348   15101 cli_runner.go:115] Run: docker volume create false-20211117121608-2067 --label name.minikube.sigs.k8s.io=false-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:28:13.435687   15101 oci.go:102] Successfully created a docker volume false-20211117121608-2067
	I1117 12:28:13.435831   15101 cli_runner.go:115] Run: docker run --rm --name false-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20211117121608-2067 --entrypoint /usr/bin/test -v false-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:28:13.826312   15101 oci.go:106] Successfully prepared a docker volume false-20211117121608-2067
	E1117 12:28:13.826361   15101 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:28:13.826383   15101 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:28:13.826384   15101 client.go:171] LocalClient.Create took 11.9745928s
	I1117 12:28:13.826410   15101 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:28:13.826518   15101 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:28:15.830602   15101 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:28:15.830703   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:28:15.946708   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	I1117 12:28:15.946815   15101 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:28:16.131055   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:28:16.263348   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	I1117 12:28:16.263464   15101 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:28:16.601350   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:28:16.756135   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	I1117 12:28:16.756230   15101 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:28:17.225333   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:28:17.343306   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	W1117 12:28:17.343424   15101 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	
	W1117 12:28:17.343454   15101 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:28:17.343492   15101 start.go:129] duration metric: createHost completed in 15.51873412s
	I1117 12:28:17.343600   15101 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:28:17.343665   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:28:17.461580   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	I1117 12:28:17.461673   15101 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:28:17.657686   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:28:17.776842   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	I1117 12:28:17.776932   15101 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:28:18.078008   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:28:18.194186   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	I1117 12:28:18.194270   15101 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:28:18.859085   15101 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067
	W1117 12:28:18.987064   15101 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067 returned with exit code 1
	W1117 12:28:18.987158   15101 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	
	W1117 12:28:18.987183   15101 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-20211117121608-2067
	I1117 12:28:18.987195   15101 fix.go:57] fixHost completed within 38.953555452s
	I1117 12:28:18.987207   15101 start.go:80] releasing machines lock for "false-20211117121608-2067", held for 38.95359102s
	W1117 12:28:18.987366   15101 out.go:241] * Failed to start docker container. Running "minikube delete -p false-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p false-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:28:19.067176   15101 out.go:176] 
	W1117 12:28:19.067292   15101 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:28:19.067304   15101 out.go:241] * 
	* 
	W1117 12:28:19.067875   15101 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:28:19.179392   15101 out.go:176] 

                                                
                                                
** /stderr **
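
For context on the repeated "docker container inspect ... --format={{.State.Status}}" failures and the "will retry after ..." lines recorded in the stderr above, here is a minimal standalone Go sketch of that state-check-and-backoff pattern. This is not minikube's actual oci.go/retry.go code; the function names, attempt count, and doubling backoff schedule are illustrative assumptions only.

// Hypothetical sketch (not minikube's oci.go/retry.go): poll
// `docker container inspect --format={{.State.Status}}`, treat
// "No such container" as an unknown state, and back off between
// attempts before giving up, as the log above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerStatus shells out to docker and returns the container's state
// string (e.g. "running", "exited"), or an error if inspect fails.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("unknown state %q: %v: %s", name, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

// waitForExited retries until the container reports "exited" or the
// attempts are exhausted, roughly mirroring the growing delays in the log.
func waitForExited(name string, attempts int) error {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		status, err := containerStatus(name)
		if err == nil && status == "exited" {
			return nil
		}
		fmt.Printf("will retry after %v: %v (status=%q)\n", delay, err, status)
		time.Sleep(delay)
		delay *= 2 // simple doubling backoff; minikube's exact schedule differs
	}
	return fmt.Errorf("couldn't verify container %q is exited", name)
}

func main() {
	// Any nonexistent container name reproduces the "No such container" loop.
	if err := waitForExited("false-20211117121608-2067", 5); err != nil {
		fmt.Println(err)
	}
}
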
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (55.14s)
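
The stderr above also shows the network setup falling through candidate subnets (192.168.49.0/24 reserved, 192.168.58.0/24 taken, 192.168.67.0/24 created). The following is a minimal standalone Go sketch of that fallback loop under the same `docker network create` flags shown in the log; it is not minikube's network_create.go, and the candidate list and names are illustrative assumptions.

// Hypothetical sketch (not minikube's network_create.go): try candidate
// /24 subnets in order and run `docker network create` until one is free,
// mirroring the 49.0 -> 58.0 -> 67.0 progression in the log above.
package main

import (
	"fmt"
	"os/exec"
)

// createNetwork creates a bridge network pinned to the given subnet, using
// the same options the log shows (ip-masq, icc, MTU 1500, minikube label).
func createNetwork(name, subnet, gateway string) error {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("create %s on %s: %v: %s", name, subnet, err, out)
	}
	return nil
}

func main() {
	name := "false-20211117121608-2067"
	// Candidate private subnets; the real selector also skips subnets with
	// unexpired reservations, as the "skipping subnet ... reserved" lines show.
	candidates := [][2]string{
		{"192.168.49.0/24", "192.168.49.1"},
		{"192.168.58.0/24", "192.168.58.1"},
		{"192.168.67.0/24", "192.168.67.1"},
	}
	for _, c := range candidates {
		if err := createNetwork(name, c[0], c[1]); err != nil {
			fmt.Println("will try the next subnet:", err)
			continue
		}
		fmt.Println("created", name, "on", c[0])
		return
	}
	fmt.Println("no free subnet found")
}
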

                                                
                                    
TestNetworkPlugins/group/cilium/Start (49.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20211117121608-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cilium-20211117121608-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : exit status 80 (49.929622567s)

                                                
                                                
-- stdout --
	* [cilium-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node cilium-20211117121608-2067 in cluster cilium-20211117121608-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cilium-20211117121608-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:28:28.585766   15607 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:28:28.585976   15607 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:28:28.585981   15607 out.go:310] Setting ErrFile to fd 2...
	I1117 12:28:28.585984   15607 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:28:28.586060   15607 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:28:28.586368   15607 out.go:304] Setting JSON to false
	I1117 12:28:28.610105   15607 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3483,"bootTime":1637177425,"procs":321,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:28:28.610197   15607 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:28:28.637630   15607 out.go:176] * [cilium-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:28:28.637869   15607 notify.go:174] Checking for updates...
	I1117 12:28:28.686035   15607 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:28:28.712025   15607 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:28:28.738029   15607 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:28:28.763828   15607 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:28:28.764354   15607 config.go:176] Loaded profile config "cert-expiration-20211117122341-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:28:28.764513   15607 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:28:28.764547   15607 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:28:28.855994   15607 docker.go:132] docker version: linux-20.10.5
	I1117 12:28:28.856130   15607 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:28:29.011450   15607 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:28:28.964356982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:28:29.038827   15607 out.go:176] * Using the docker driver based on user configuration
	I1117 12:28:29.038942   15607 start.go:280] selected driver: docker
	I1117 12:28:29.038958   15607 start.go:775] validating driver "docker" against <nil>
	I1117 12:28:29.038979   15607 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:28:29.042366   15607 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:28:29.197061   15607 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:28:29.152079378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:28:29.197193   15607 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:28:29.197302   15607 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:28:29.197319   15607 cni.go:93] Creating CNI manager for "cilium"
	I1117 12:28:29.197326   15607 start_flags.go:277] Found "Cilium" CNI - setting NetworkPlugin=cni
	I1117 12:28:29.197337   15607 start_flags.go:282] config:
	{Name:cilium-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:cilium-20211117121608-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netw
orkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:28:29.223498   15607 out.go:176] * Starting control plane node cilium-20211117121608-2067 in cluster cilium-20211117121608-2067
	I1117 12:28:29.223573   15607 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:28:29.250015   15607 out.go:176] * Pulling base image ...
	I1117 12:28:29.250135   15607 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:28:29.250159   15607 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:28:29.250249   15607 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:28:29.250285   15607 cache.go:57] Caching tarball of preloaded images
	I1117 12:28:29.250552   15607 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:28:29.250572   15607 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:28:29.251738   15607 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/cilium-20211117121608-2067/config.json ...
	I1117 12:28:29.251912   15607 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/cilium-20211117121608-2067/config.json: {Name:mkc7f8fe84a0216cc5bd199e283c2ba608b54170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:28:29.389132   15607 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:28:29.389155   15607 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:28:29.389167   15607 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:28:29.389206   15607 start.go:313] acquiring machines lock for cilium-20211117121608-2067: {Name:mk3271f47eb17f71ffe7e087f23b6dd2d7613411 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:28:29.389335   15607 start.go:317] acquired machines lock for "cilium-20211117121608-2067" in 117.462µs
	I1117 12:28:29.389363   15607 start.go:89] Provisioning new machine with config: &{Name:cilium-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:cilium-20211117121608-2067 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:28:29.389413   15607 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:28:29.416445   15607 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:28:29.416846   15607 start.go:160] libmachine.API.Create for "cilium-20211117121608-2067" (driver="docker")
	I1117 12:28:29.416889   15607 client.go:168] LocalClient.Create starting
	I1117 12:28:29.417078   15607 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:28:29.417162   15607 main.go:130] libmachine: Decoding PEM data...
	I1117 12:28:29.417196   15607 main.go:130] libmachine: Parsing certificate...
	I1117 12:28:29.417308   15607 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:28:29.417360   15607 main.go:130] libmachine: Decoding PEM data...
	I1117 12:28:29.417376   15607 main.go:130] libmachine: Parsing certificate...
	I1117 12:28:29.418454   15607 cli_runner.go:115] Run: docker network inspect cilium-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:28:29.523211   15607 cli_runner.go:162] docker network inspect cilium-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:28:29.523330   15607 network_create.go:254] running [docker network inspect cilium-20211117121608-2067] to gather additional debugging logs...
	I1117 12:28:29.523350   15607 cli_runner.go:115] Run: docker network inspect cilium-20211117121608-2067
	W1117 12:28:29.625194   15607 cli_runner.go:162] docker network inspect cilium-20211117121608-2067 returned with exit code 1
	I1117 12:28:29.625224   15607 network_create.go:257] error running [docker network inspect cilium-20211117121608-2067]: docker network inspect cilium-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20211117121608-2067
	I1117 12:28:29.625239   15607 network_create.go:259] output of [docker network inspect cilium-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20211117121608-2067
	
	** /stderr **
	I1117 12:28:29.625340   15607 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:28:29.727771   15607 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000024120] misses:0}
	I1117 12:28:29.727814   15607 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:28:29.727831   15607 network_create.go:106] attempt to create docker network cilium-20211117121608-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:28:29.727907   15607 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20211117121608-2067
	I1117 12:28:34.530050   15607 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20211117121608-2067: (4.802126399s)
	I1117 12:28:34.530076   15607 network_create.go:90] docker network cilium-20211117121608-2067 192.168.49.0/24 created
	I1117 12:28:34.530090   15607 kic.go:106] calculated static IP "192.168.49.2" for the "cilium-20211117121608-2067" container
	I1117 12:28:34.530196   15607 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:28:34.631106   15607 cli_runner.go:115] Run: docker volume create cilium-20211117121608-2067 --label name.minikube.sigs.k8s.io=cilium-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:28:34.735345   15607 oci.go:102] Successfully created a docker volume cilium-20211117121608-2067
	I1117 12:28:34.735491   15607 cli_runner.go:115] Run: docker run --rm --name cilium-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20211117121608-2067 --entrypoint /usr/bin/test -v cilium-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:28:35.229067   15607 oci.go:106] Successfully prepared a docker volume cilium-20211117121608-2067
	E1117 12:28:35.229119   15607 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:28:35.229127   15607 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:28:35.229143   15607 client.go:171] LocalClient.Create took 5.812282475s
	I1117 12:28:35.229153   15607 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:28:35.229261   15607 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:28:37.234796   15607 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:28:37.234921   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:28:37.360305   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	I1117 12:28:37.360399   15607 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:37.637010   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:28:37.758640   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	I1117 12:28:37.758725   15607 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:38.301977   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:28:38.420250   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	I1117 12:28:38.420356   15607 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:39.084740   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:28:39.206101   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	W1117 12:28:39.206182   15607 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	
	W1117 12:28:39.206223   15607 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:39.206235   15607 start.go:129] duration metric: createHost completed in 9.816878056s
	I1117 12:28:39.206241   15607 start.go:80] releasing machines lock for "cilium-20211117121608-2067", held for 9.816960706s
	W1117 12:28:39.206257   15607 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:28:39.206769   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:39.334894   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:28:39.334953   15607 delete.go:82] Unable to get host status for cilium-20211117121608-2067, assuming it has already been deleted: state: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	W1117 12:28:39.335101   15607 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:28:39.335114   15607 start.go:547] Will try again in 5 seconds ...
	I1117 12:28:41.177149   15607 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.94733561s)
	I1117 12:28:41.177174   15607 kic.go:188] duration metric: took 5.948057 seconds to extract preloaded images to volume
	I1117 12:28:44.335225   15607 start.go:313] acquiring machines lock for cilium-20211117121608-2067: {Name:mk3271f47eb17f71ffe7e087f23b6dd2d7613411 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:28:44.335360   15607 start.go:317] acquired machines lock for "cilium-20211117121608-2067" in 113.969µs
	I1117 12:28:44.335385   15607 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:28:44.335393   15607 fix.go:55] fixHost starting: 
	I1117 12:28:44.335672   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:44.461868   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:28:44.461926   15607 fix.go:108] recreateIfNeeded on cilium-20211117121608-2067: state= err=unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:44.461949   15607 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:28:44.488597   15607 out.go:176] * docker "cilium-20211117121608-2067" container is missing, will recreate.
	I1117 12:28:44.488631   15607 delete.go:124] DEMOLISHING cilium-20211117121608-2067 ...
	I1117 12:28:44.488751   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:44.610524   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:28:44.610572   15607 stop.go:75] unable to get state: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:44.610591   15607 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:44.611014   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:44.730270   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:28:44.730368   15607 delete.go:82] Unable to get host status for cilium-20211117121608-2067, assuming it has already been deleted: state: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:44.730514   15607 cli_runner.go:115] Run: docker container inspect -f {{.Id}} cilium-20211117121608-2067
	W1117 12:28:44.851485   15607 cli_runner.go:162] docker container inspect -f {{.Id}} cilium-20211117121608-2067 returned with exit code 1
	I1117 12:28:44.851517   15607 kic.go:360] could not find the container cilium-20211117121608-2067 to remove it. will try anyways
	I1117 12:28:44.851618   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:45.072848   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:28:45.072895   15607 oci.go:83] error getting container status, will try to delete anyways: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:45.072998   15607 cli_runner.go:115] Run: docker exec --privileged -t cilium-20211117121608-2067 /bin/bash -c "sudo init 0"
	W1117 12:28:45.187776   15607 cli_runner.go:162] docker exec --privileged -t cilium-20211117121608-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:28:45.187812   15607 oci.go:656] error shutdown cilium-20211117121608-2067: docker exec --privileged -t cilium-20211117121608-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:46.188012   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:46.309182   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:28:46.309232   15607 oci.go:668] temporary error verifying shutdown: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:46.309240   15607 oci.go:670] temporary error: container cilium-20211117121608-2067 status is  but expect it to be exited
	I1117 12:28:46.309265   15607 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:46.776107   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:46.891075   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:28:46.891121   15607 oci.go:668] temporary error verifying shutdown: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:46.891129   15607 oci.go:670] temporary error: container cilium-20211117121608-2067 status is  but expect it to be exited
	I1117 12:28:46.891150   15607 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:47.784741   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:47.908659   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:28:47.908788   15607 oci.go:668] temporary error verifying shutdown: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:47.908835   15607 oci.go:670] temporary error: container cilium-20211117121608-2067 status is  but expect it to be exited
	I1117 12:28:47.908905   15607 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:48.551123   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:48.673315   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:28:48.673356   15607 oci.go:668] temporary error verifying shutdown: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:48.673371   15607 oci.go:670] temporary error: container cilium-20211117121608-2067 status is  but expect it to be exited
	I1117 12:28:48.673394   15607 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:49.786515   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:49.915214   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:28:49.915272   15607 oci.go:668] temporary error verifying shutdown: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:49.915295   15607 oci.go:670] temporary error: container cilium-20211117121608-2067 status is  but expect it to be exited
	I1117 12:28:49.915328   15607 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:51.434666   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:51.540506   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:28:51.540553   15607 oci.go:668] temporary error verifying shutdown: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:51.540562   15607 oci.go:670] temporary error: container cilium-20211117121608-2067 status is  but expect it to be exited
	I1117 12:28:51.540584   15607 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:54.584642   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:28:54.695010   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:28:54.695050   15607 oci.go:668] temporary error verifying shutdown: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:28:54.695057   15607 oci.go:670] temporary error: container cilium-20211117121608-2067 status is  but expect it to be exited
	I1117 12:28:54.695098   15607 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:29:00.484598   15607 cli_runner.go:115] Run: docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:00.584087   15607 cli_runner.go:162] docker container inspect cilium-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:00.584127   15607 oci.go:668] temporary error verifying shutdown: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:29:00.584135   15607 oci.go:670] temporary error: container cilium-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:00.584166   15607 oci.go:87] couldn't shut down cilium-20211117121608-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "cilium-20211117121608-2067": docker container inspect cilium-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	 
	I1117 12:29:00.584258   15607 cli_runner.go:115] Run: docker rm -f -v cilium-20211117121608-2067
	I1117 12:29:00.686434   15607 cli_runner.go:115] Run: docker container inspect -f {{.Id}} cilium-20211117121608-2067
	W1117 12:29:00.786818   15607 cli_runner.go:162] docker container inspect -f {{.Id}} cilium-20211117121608-2067 returned with exit code 1
	I1117 12:29:00.786940   15607 cli_runner.go:115] Run: docker network inspect cilium-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:29:00.890155   15607 cli_runner.go:162] docker network inspect cilium-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:29:00.890285   15607 network_create.go:254] running [docker network inspect cilium-20211117121608-2067] to gather additional debugging logs...
	I1117 12:29:00.890307   15607 cli_runner.go:115] Run: docker network inspect cilium-20211117121608-2067
	W1117 12:29:00.991097   15607 cli_runner.go:162] docker network inspect cilium-20211117121608-2067 returned with exit code 1
	I1117 12:29:00.991128   15607 network_create.go:257] error running [docker network inspect cilium-20211117121608-2067]: docker network inspect cilium-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20211117121608-2067
	I1117 12:29:00.991145   15607 network_create.go:259] output of [docker network inspect cilium-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20211117121608-2067
	
	** /stderr **
	W1117 12:29:00.991411   15607 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:29:00.991417   15607 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:29:02.000985   15607 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:29:02.049613   15607 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:29:02.049710   15607 start.go:160] libmachine.API.Create for "cilium-20211117121608-2067" (driver="docker")
	I1117 12:29:02.049731   15607 client.go:168] LocalClient.Create starting
	I1117 12:29:02.049868   15607 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:29:02.049922   15607 main.go:130] libmachine: Decoding PEM data...
	I1117 12:29:02.049939   15607 main.go:130] libmachine: Parsing certificate...
	I1117 12:29:02.050003   15607 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:29:02.050046   15607 main.go:130] libmachine: Decoding PEM data...
	I1117 12:29:02.050058   15607 main.go:130] libmachine: Parsing certificate...
	I1117 12:29:02.050536   15607 cli_runner.go:115] Run: docker network inspect cilium-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:29:02.150661   15607 cli_runner.go:162] docker network inspect cilium-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:29:02.150763   15607 network_create.go:254] running [docker network inspect cilium-20211117121608-2067] to gather additional debugging logs...
	I1117 12:29:02.150783   15607 cli_runner.go:115] Run: docker network inspect cilium-20211117121608-2067
	W1117 12:29:02.250862   15607 cli_runner.go:162] docker network inspect cilium-20211117121608-2067 returned with exit code 1
	I1117 12:29:02.250886   15607 network_create.go:257] error running [docker network inspect cilium-20211117121608-2067]: docker network inspect cilium-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20211117121608-2067
	I1117 12:29:02.250905   15607 network_create.go:259] output of [docker network inspect cilium-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20211117121608-2067
	
	** /stderr **
	I1117 12:29:02.251001   15607 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:29:02.353041   15607 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000024120] amended:false}} dirty:map[] misses:0}
	I1117 12:29:02.353085   15607 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:29:02.353272   15607 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000024120] amended:true}} dirty:map[192.168.49.0:0xc000024120 192.168.58.0:0xc000186348] misses:0}
	I1117 12:29:02.353284   15607 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:29:02.353292   15607 network_create.go:106] attempt to create docker network cilium-20211117121608-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:29:02.353372   15607 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20211117121608-2067
	I1117 12:29:12.634600   15607 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20211117121608-2067: (10.281248747s)
	I1117 12:29:12.634623   15607 network_create.go:90] docker network cilium-20211117121608-2067 192.168.58.0/24 created
	I1117 12:29:12.634637   15607 kic.go:106] calculated static IP "192.168.58.2" for the "cilium-20211117121608-2067" container
	I1117 12:29:12.634745   15607 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:29:12.735538   15607 cli_runner.go:115] Run: docker volume create cilium-20211117121608-2067 --label name.minikube.sigs.k8s.io=cilium-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:29:12.836620   15607 oci.go:102] Successfully created a docker volume cilium-20211117121608-2067
	I1117 12:29:12.836750   15607 cli_runner.go:115] Run: docker run --rm --name cilium-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20211117121608-2067 --entrypoint /usr/bin/test -v cilium-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:29:13.231167   15607 oci.go:106] Successfully prepared a docker volume cilium-20211117121608-2067
	E1117 12:29:13.231215   15607 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:29:13.231226   15607 client.go:171] LocalClient.Create took 11.181559882s
	I1117 12:29:13.231235   15607 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:29:13.231254   15607 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:29:13.231376   15607 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:29:15.237045   15607 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:29:15.237166   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:29:15.362206   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	I1117 12:29:15.362324   15607 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:29:15.550963   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:29:15.670572   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	I1117 12:29:15.670751   15607 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:29:16.001266   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:29:16.113630   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	I1117 12:29:16.113709   15607 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:29:16.577225   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:29:16.695089   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	W1117 12:29:16.695185   15607 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	
	W1117 12:29:16.695202   15607 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:29:16.695222   15607 start.go:129] duration metric: createHost completed in 14.694306256s
	I1117 12:29:16.695292   15607 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:29:16.695351   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:29:16.812612   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	I1117 12:29:16.812698   15607 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:29:17.011006   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:29:17.132195   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	I1117 12:29:17.132283   15607 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:29:17.435615   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:29:17.553648   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	I1117 12:29:17.553748   15607 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:29:18.226329   15607 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067
	W1117 12:29:18.344012   15607 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067 returned with exit code 1
	W1117 12:29:18.344102   15607 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	
	W1117 12:29:18.344117   15607 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "cilium-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cilium-20211117121608-2067
	I1117 12:29:18.344128   15607 fix.go:57] fixHost completed within 34.00894731s
	I1117 12:29:18.344142   15607 start.go:80] releasing machines lock for "cilium-20211117121608-2067", held for 34.008984556s
	W1117 12:29:18.344296   15607 out.go:241] * Failed to start docker container. Running "minikube delete -p cilium-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cilium-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:29:18.391972   15607 out.go:176] 
	W1117 12:29:18.392123   15607 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:29:18.392145   15607 out.go:241] * 
	* 
	W1117 12:29:18.392780   15607 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:29:18.470137   15607 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (49.94s)
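The cilium start above fails in the recreate path: the missing container is first "demolished", with a verify-shutdown loop retrying docker container inspect --format={{.State.Status}} at growing intervals (462ms, 890ms, ... 5.78s) before giving up, and the subsequent kic node creation then aborts at oci.go:173 with "Unable to locate kernel modules". A minimal standalone sketch of that retry-with-backoff verification pattern (simplified; not minikube's actual retry.go/oci.go code, and the container name is hypothetical):

    // Sketch of the backoff loop visible in the log: poll `docker container
    // inspect` for the container state and back off between attempts until a
    // deadline is reached. Simplified; timings and error handling differ from
    // minikube's real implementation.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // containerState shells out to the docker CLI, matching the command line
    // seen in the log output above.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").CombinedOutput()
        if err != nil {
            return "", fmt.Errorf("unknown state %q: %v: %s", name, err, out)
        }
        return strings.TrimSpace(string(out)), nil
    }

    // waitForExited polls with a growing delay, roughly like the
    // "will retry after ..." lines above.
    func waitForExited(name string, deadline time.Duration) error {
        delay := 500 * time.Millisecond
        for start := time.Now(); time.Since(start) < deadline; {
            state, err := containerState(name)
            if err == nil && state == "exited" {
                return nil
            }
            fmt.Printf("container %s status is %q, retrying in %v\n", name, state, delay)
            time.Sleep(delay)
            delay *= 2 // back off between attempts
        }
        return fmt.Errorf("couldn't verify container %s is exited within %v", name, deadline)
    }

    func main() {
        // Hypothetical container name, standing in for cilium-20211117121608-2067.
        if err := waitForExited("example-container", 15*time.Second); err != nil {
            fmt.Println(err)
        }
    }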

                                                
                                    
TestNetworkPlugins/group/calico/Start (48.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20211117121608-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-20211117121608-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : exit status 80 (48.96088612s)

                                                
                                                
-- stdout --
	* [calico-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node calico-20211117121608-2067 in cluster calico-20211117121608-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "calico-20211117121608-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:28:58.625521   15886 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:28:58.625712   15886 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:28:58.625717   15886 out.go:310] Setting ErrFile to fd 2...
	I1117 12:28:58.625720   15886 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:28:58.625802   15886 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:28:58.626116   15886 out.go:304] Setting JSON to false
	I1117 12:28:58.653331   15886 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3513,"bootTime":1637177425,"procs":321,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:28:58.653428   15886 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:28:58.679574   15886 out.go:176] * [calico-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:28:58.679681   15886 notify.go:174] Checking for updates...
	I1117 12:28:58.727414   15886 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:28:58.753612   15886 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:28:58.779604   15886 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:28:58.805167   15886 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:28:58.805598   15886 config.go:176] Loaded profile config "cilium-20211117121608-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:28:58.805683   15886 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:28:58.805719   15886 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:28:58.894648   15886 docker.go:132] docker version: linux-20.10.5
	I1117 12:28:58.894776   15886 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:28:59.046366   15886 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:28:59.000725284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:28:59.095047   15886 out.go:176] * Using the docker driver based on user configuration
	I1117 12:28:59.095140   15886 start.go:280] selected driver: docker
	I1117 12:28:59.095156   15886 start.go:775] validating driver "docker" against <nil>
	I1117 12:28:59.095179   15886 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:28:59.098629   15886 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:28:59.247163   15886 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:28:59.204368498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:28:59.247260   15886 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:28:59.247384   15886 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:28:59.247400   15886 cni.go:93] Creating CNI manager for "calico"
	I1117 12:28:59.247408   15886 start_flags.go:277] Found "Calico" CNI - setting NetworkPlugin=cni
	I1117 12:28:59.247416   15886 start_flags.go:282] config:
	{Name:calico-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:calico-20211117121608-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:28:59.274489   15886 out.go:176] * Starting control plane node calico-20211117121608-2067 in cluster calico-20211117121608-2067
	I1117 12:28:59.274565   15886 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:28:59.322246   15886 out.go:176] * Pulling base image ...
	I1117 12:28:59.322372   15886 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:28:59.322413   15886 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:28:59.322472   15886 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:28:59.322504   15886 cache.go:57] Caching tarball of preloaded images
	I1117 12:28:59.322740   15886 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:28:59.323539   15886 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:28:59.323889   15886 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/calico-20211117121608-2067/config.json ...
	I1117 12:28:59.323983   15886 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/calico-20211117121608-2067/config.json: {Name:mk2e630fb1a43f81e5af1116a147e8b474efd09c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:28:59.438185   15886 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:28:59.438210   15886 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:28:59.438222   15886 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:28:59.438268   15886 start.go:313] acquiring machines lock for calico-20211117121608-2067: {Name:mk57ad0854fe169ab0fe6ecdbbebbb1a0da904d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:28:59.438401   15886 start.go:317] acquired machines lock for "calico-20211117121608-2067" in 121.048µs
	I1117 12:28:59.438426   15886 start.go:89] Provisioning new machine with config: &{Name:calico-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:calico-20211117121608-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:28:59.438569   15886 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:28:59.487110   15886 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:28:59.487516   15886 start.go:160] libmachine.API.Create for "calico-20211117121608-2067" (driver="docker")
	I1117 12:28:59.487562   15886 client.go:168] LocalClient.Create starting
	I1117 12:28:59.487744   15886 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:28:59.487819   15886 main.go:130] libmachine: Decoding PEM data...
	I1117 12:28:59.487864   15886 main.go:130] libmachine: Parsing certificate...
	I1117 12:28:59.487977   15886 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:28:59.488031   15886 main.go:130] libmachine: Decoding PEM data...
	I1117 12:28:59.488045   15886 main.go:130] libmachine: Parsing certificate...
	I1117 12:28:59.489046   15886 cli_runner.go:115] Run: docker network inspect calico-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:28:59.591536   15886 cli_runner.go:162] docker network inspect calico-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:28:59.591649   15886 network_create.go:254] running [docker network inspect calico-20211117121608-2067] to gather additional debugging logs...
	I1117 12:28:59.591668   15886 cli_runner.go:115] Run: docker network inspect calico-20211117121608-2067
	W1117 12:28:59.691759   15886 cli_runner.go:162] docker network inspect calico-20211117121608-2067 returned with exit code 1
	I1117 12:28:59.691781   15886 network_create.go:257] error running [docker network inspect calico-20211117121608-2067]: docker network inspect calico-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20211117121608-2067
	I1117 12:28:59.691804   15886 network_create.go:259] output of [docker network inspect calico-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20211117121608-2067
	
	** /stderr **
	I1117 12:28:59.691898   15886 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:28:59.792525   15886 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000ee78] misses:0}
	I1117 12:28:59.792562   15886 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:28:59.792577   15886 network_create.go:106] attempt to create docker network calico-20211117121608-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:28:59.792652   15886 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211117121608-2067
	I1117 12:29:04.785166   15886 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211117121608-2067: (4.992502011s)
	I1117 12:29:04.785200   15886 network_create.go:90] docker network calico-20211117121608-2067 192.168.49.0/24 created
	I1117 12:29:04.785215   15886 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20211117121608-2067" container
	I1117 12:29:04.785330   15886 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:29:04.884321   15886 cli_runner.go:115] Run: docker volume create calico-20211117121608-2067 --label name.minikube.sigs.k8s.io=calico-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:29:04.984897   15886 oci.go:102] Successfully created a docker volume calico-20211117121608-2067
	I1117 12:29:04.985026   15886 cli_runner.go:115] Run: docker run --rm --name calico-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20211117121608-2067 --entrypoint /usr/bin/test -v calico-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:29:05.477812   15886 oci.go:106] Successfully prepared a docker volume calico-20211117121608-2067
	I1117 12:29:05.477876   15886 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	E1117 12:29:05.477875   15886 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:29:05.477905   15886 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:29:05.477905   15886 client.go:171] LocalClient.Create took 5.990371916s
	I1117 12:29:05.478004   15886 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:29:07.484878   15886 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:29:07.485016   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:07.627139   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:07.627247   15886 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:07.904197   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:08.022595   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:08.022698   15886 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:08.563494   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:08.687002   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:08.687076   15886 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:09.351003   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:09.480063   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	W1117 12:29:09.480147   15886 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	
	W1117 12:29:09.480210   15886 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:09.480220   15886 start.go:129] duration metric: createHost completed in 10.041706104s
	I1117 12:29:09.480228   15886 start.go:80] releasing machines lock for "calico-20211117121608-2067", held for 10.04188282s
	W1117 12:29:09.480282   15886 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:29:09.481032   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:09.610721   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:09.610765   15886 delete.go:82] Unable to get host status for calico-20211117121608-2067, assuming it has already been deleted: state: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	W1117 12:29:09.610914   15886 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:29:09.610925   15886 start.go:547] Will try again in 5 seconds ...
	I1117 12:29:11.732871   15886 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.254862783s)
	I1117 12:29:11.732897   15886 kic.go:188] duration metric: took 6.255032 seconds to extract preloaded images to volume
	I1117 12:29:14.611278   15886 start.go:313] acquiring machines lock for calico-20211117121608-2067: {Name:mk57ad0854fe169ab0fe6ecdbbebbb1a0da904d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:29:14.611393   15886 start.go:317] acquired machines lock for "calico-20211117121608-2067" in 94.58µs
	I1117 12:29:14.611429   15886 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:29:14.611441   15886 fix.go:55] fixHost starting: 
	I1117 12:29:14.611765   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:14.733045   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:14.733112   15886 fix.go:108] recreateIfNeeded on calico-20211117121608-2067: state= err=unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:14.733147   15886 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:29:14.761774   15886 out.go:176] * docker "calico-20211117121608-2067" container is missing, will recreate.
	I1117 12:29:14.761792   15886 delete.go:124] DEMOLISHING calico-20211117121608-2067 ...
	I1117 12:29:14.761929   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:14.880012   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:29:14.880064   15886 stop.go:75] unable to get state: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:14.880082   15886 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:14.880548   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:15.001576   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:15.001629   15886 delete.go:82] Unable to get host status for calico-20211117121608-2067, assuming it has already been deleted: state: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:15.001733   15886 cli_runner.go:115] Run: docker container inspect -f {{.Id}} calico-20211117121608-2067
	W1117 12:29:15.123237   15886 cli_runner.go:162] docker container inspect -f {{.Id}} calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:15.123277   15886 kic.go:360] could not find the container calico-20211117121608-2067 to remove it. will try anyways
	I1117 12:29:15.123395   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:15.245923   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:29:15.245968   15886 oci.go:83] error getting container status, will try to delete anyways: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:15.246088   15886 cli_runner.go:115] Run: docker exec --privileged -t calico-20211117121608-2067 /bin/bash -c "sudo init 0"
	W1117 12:29:15.373103   15886 cli_runner.go:162] docker exec --privileged -t calico-20211117121608-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:29:15.373145   15886 oci.go:656] error shutdown calico-20211117121608-2067: docker exec --privileged -t calico-20211117121608-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:16.376982   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:16.492334   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:16.492404   15886 oci.go:668] temporary error verifying shutdown: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:16.492426   15886 oci.go:670] temporary error: container calico-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:16.492466   15886 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:16.955094   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:17.076653   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:17.076699   15886 oci.go:668] temporary error verifying shutdown: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:17.076709   15886 oci.go:670] temporary error: container calico-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:17.076732   15886 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:17.975981   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:18.094103   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:18.094150   15886 oci.go:668] temporary error verifying shutdown: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:18.094169   15886 oci.go:670] temporary error: container calico-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:18.094194   15886 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:18.734404   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:18.839844   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:18.839883   15886 oci.go:668] temporary error verifying shutdown: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:18.839893   15886 oci.go:670] temporary error: container calico-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:18.839914   15886 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:19.951775   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:20.063392   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:20.063443   15886 oci.go:668] temporary error verifying shutdown: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:20.063458   15886 oci.go:670] temporary error: container calico-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:20.063486   15886 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:21.584472   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:21.686775   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:21.686814   15886 oci.go:668] temporary error verifying shutdown: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:21.686825   15886 oci.go:670] temporary error: container calico-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:21.686845   15886 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:24.734364   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:24.833911   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:24.833951   15886 oci.go:668] temporary error verifying shutdown: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:24.833960   15886 oci.go:670] temporary error: container calico-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:24.833983   15886 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:30.625842   15886 cli_runner.go:115] Run: docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:30.723791   15886 cli_runner.go:162] docker container inspect calico-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:30.723829   15886 oci.go:668] temporary error verifying shutdown: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:30.723839   15886 oci.go:670] temporary error: container calico-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:30.723863   15886 oci.go:87] couldn't shut down calico-20211117121608-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "calico-20211117121608-2067": docker container inspect calico-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	 
	I1117 12:29:30.723949   15886 cli_runner.go:115] Run: docker rm -f -v calico-20211117121608-2067
	I1117 12:29:30.825148   15886 cli_runner.go:115] Run: docker container inspect -f {{.Id}} calico-20211117121608-2067
	W1117 12:29:30.928735   15886 cli_runner.go:162] docker container inspect -f {{.Id}} calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:30.928856   15886 cli_runner.go:115] Run: docker network inspect calico-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:29:31.028952   15886 cli_runner.go:162] docker network inspect calico-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:29:31.029055   15886 network_create.go:254] running [docker network inspect calico-20211117121608-2067] to gather additional debugging logs...
	I1117 12:29:31.029072   15886 cli_runner.go:115] Run: docker network inspect calico-20211117121608-2067
	W1117 12:29:31.130403   15886 cli_runner.go:162] docker network inspect calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:31.130429   15886 network_create.go:257] error running [docker network inspect calico-20211117121608-2067]: docker network inspect calico-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20211117121608-2067
	I1117 12:29:31.130445   15886 network_create.go:259] output of [docker network inspect calico-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20211117121608-2067
	
	** /stderr **
	W1117 12:29:31.130711   15886 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:29:31.130718   15886 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:29:32.134255   15886 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:29:32.181779   15886 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:29:32.181948   15886 start.go:160] libmachine.API.Create for "calico-20211117121608-2067" (driver="docker")
	I1117 12:29:32.181978   15886 client.go:168] LocalClient.Create starting
	I1117 12:29:32.182179   15886 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:29:32.182263   15886 main.go:130] libmachine: Decoding PEM data...
	I1117 12:29:32.182294   15886 main.go:130] libmachine: Parsing certificate...
	I1117 12:29:32.182390   15886 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:29:32.182465   15886 main.go:130] libmachine: Decoding PEM data...
	I1117 12:29:32.182483   15886 main.go:130] libmachine: Parsing certificate...
	I1117 12:29:32.183525   15886 cli_runner.go:115] Run: docker network inspect calico-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:29:32.285255   15886 cli_runner.go:162] docker network inspect calico-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:29:32.285427   15886 network_create.go:254] running [docker network inspect calico-20211117121608-2067] to gather additional debugging logs...
	I1117 12:29:32.285445   15886 cli_runner.go:115] Run: docker network inspect calico-20211117121608-2067
	W1117 12:29:32.387085   15886 cli_runner.go:162] docker network inspect calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:32.387110   15886 network_create.go:257] error running [docker network inspect calico-20211117121608-2067]: docker network inspect calico-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20211117121608-2067
	I1117 12:29:32.387124   15886 network_create.go:259] output of [docker network inspect calico-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20211117121608-2067
	
	** /stderr **
	I1117 12:29:32.387222   15886 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:29:32.486505   15886 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000ee78] amended:false}} dirty:map[] misses:0}
	I1117 12:29:32.486541   15886 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:29:32.486719   15886 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000ee78] amended:true}} dirty:map[192.168.49.0:0xc00000ee78 192.168.58.0:0xc0001122e0] misses:0}
	I1117 12:29:32.486732   15886 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:29:32.486740   15886 network_create.go:106] attempt to create docker network calico-20211117121608-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:29:32.486834   15886 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211117121608-2067
	I1117 12:29:41.565414   15886 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211117121608-2067: (9.078593591s)
	I1117 12:29:41.565441   15886 network_create.go:90] docker network calico-20211117121608-2067 192.168.58.0/24 created
	I1117 12:29:41.565455   15886 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20211117121608-2067" container
	I1117 12:29:41.565555   15886 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:29:41.666442   15886 cli_runner.go:115] Run: docker volume create calico-20211117121608-2067 --label name.minikube.sigs.k8s.io=calico-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:29:41.768473   15886 oci.go:102] Successfully created a docker volume calico-20211117121608-2067
	I1117 12:29:41.768594   15886 cli_runner.go:115] Run: docker run --rm --name calico-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20211117121608-2067 --entrypoint /usr/bin/test -v calico-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:29:42.185456   15886 oci.go:106] Successfully prepared a docker volume calico-20211117121608-2067
	E1117 12:29:42.185505   15886 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:29:42.185516   15886 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:29:42.185517   15886 client.go:171] LocalClient.Create took 10.003594778s
	I1117 12:29:42.185533   15886 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:29:42.185650   15886 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:29:44.191848   15886 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:29:44.192023   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:44.346924   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:44.347058   15886 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:44.534735   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:44.669844   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:44.669936   15886 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:45.007138   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:45.132673   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:45.132792   15886 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:45.600757   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:45.717113   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	W1117 12:29:45.717189   15886 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	
	W1117 12:29:45.717211   15886 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:45.717221   15886 start.go:129] duration metric: createHost completed in 13.583033249s
	I1117 12:29:45.717284   15886 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:29:45.717348   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:45.839169   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:45.839258   15886 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:46.035329   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:46.164408   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:46.164491   15886 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:46.464325   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:46.599903   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	I1117 12:29:46.600023   15886 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:47.263601   15886 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067
	W1117 12:29:47.381977   15886 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067 returned with exit code 1
	W1117 12:29:47.382062   15886 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	
	W1117 12:29:47.382086   15886 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-20211117121608-2067
	I1117 12:29:47.382094   15886 fix.go:57] fixHost completed within 32.770857474s
	I1117 12:29:47.382108   15886 start.go:80] releasing machines lock for "calico-20211117121608-2067", held for 32.770908757s
	W1117 12:29:47.382248   15886 out.go:241] * Failed to start docker container. Running "minikube delete -p calico-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p calico-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:29:47.429701   15886 out.go:176] 
	W1117 12:29:47.429808   15886 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:29:47.429816   15886 out.go:241] * 
	* 
	W1117 12:29:47.430377   15886 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:29:47.507655   15886 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (48.97s)
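The calico failure above never gets past host creation: oci.go:173 logs "Unable to locate kernel modules" while preparing the kic base volume, so the node container is never created and every later "docker container inspect" call fails with "No such container". For manual follow-up, a short diagnostic and cleanup sequence can be assembled from commands already shown in this log (a sketch only; the profile name calico-20211117121608-2067 is specific to this run and would need adjusting for other runs):

	# verify that neither the node container nor its network exists
	docker container inspect calico-20211117121608-2067 --format={{.State.Status}}
	docker network inspect calico-20211117121608-2067
	# cleanup suggested by minikube in the output above, then collect logs for an issue report
	minikube delete -p calico-20211117121608-2067
	minikube logs --file=logs.txt

These are the same commands minikube runs or recommends in the output above, listed here only to show how the state checks can be reproduced outside the test harness.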

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (49.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-weave-20211117121608-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p custom-weave-20211117121608-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker : exit status 80 (49.639750109s)

                                                
                                                
-- stdout --
	* [custom-weave-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node custom-weave-20211117121608-2067 in cluster custom-weave-20211117121608-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "custom-weave-20211117121608-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:29:27.788664   16161 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:29:27.788874   16161 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:29:27.788879   16161 out.go:310] Setting ErrFile to fd 2...
	I1117 12:29:27.788882   16161 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:29:27.788950   16161 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:29:27.789255   16161 out.go:304] Setting JSON to false
	I1117 12:29:27.816880   16161 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3542,"bootTime":1637177425,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:29:27.816986   16161 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:29:27.843883   16161 out.go:176] * [custom-weave-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:29:27.844081   16161 notify.go:174] Checking for updates...
	I1117 12:29:27.891631   16161 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:29:27.917389   16161 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:29:27.943218   16161 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:29:27.969412   16161 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:29:27.969836   16161 config.go:176] Loaded profile config "calico-20211117121608-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:29:27.969922   16161 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:29:27.969958   16161 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:29:28.059987   16161 docker.go:132] docker version: linux-20.10.5
	I1117 12:29:28.060138   16161 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:29:28.211509   16161 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:29:28.16604177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:29:28.238624   16161 out.go:176] * Using the docker driver based on user configuration
	I1117 12:29:28.238701   16161 start.go:280] selected driver: docker
	I1117 12:29:28.238718   16161 start.go:775] validating driver "docker" against <nil>
	I1117 12:29:28.238742   16161 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:29:28.242604   16161 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:29:28.392104   16161 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:29:28.347018345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:29:28.392198   16161 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:29:28.392317   16161 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:29:28.392333   16161 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I1117 12:29:28.392348   16161 start_flags.go:277] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I1117 12:29:28.392356   16161 start_flags.go:282] config:
	{Name:custom-weave-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:custom-weave-20211117121608-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:29:28.419240   16161 out.go:176] * Starting control plane node custom-weave-20211117121608-2067 in cluster custom-weave-20211117121608-2067
	I1117 12:29:28.419341   16161 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:29:28.444987   16161 out.go:176] * Pulling base image ...
	I1117 12:29:28.445075   16161 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:29:28.445125   16161 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:29:28.445161   16161 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:29:28.445184   16161 cache.go:57] Caching tarball of preloaded images
	I1117 12:29:28.445434   16161 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:29:28.445466   16161 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:29:28.446438   16161 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/custom-weave-20211117121608-2067/config.json ...
	I1117 12:29:28.446600   16161 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/custom-weave-20211117121608-2067/config.json: {Name:mk3cad65d09f6d882b76a86690b3c7022bc8aa5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:29:28.559508   16161 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:29:28.559529   16161 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:29:28.559544   16161 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:29:28.559608   16161 start.go:313] acquiring machines lock for custom-weave-20211117121608-2067: {Name:mk49be788ed95d8c930ff23cb75c5a0ddbb7adcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:29:28.559769   16161 start.go:317] acquired machines lock for "custom-weave-20211117121608-2067" in 148.587µs
	I1117 12:29:28.559801   16161 start.go:89] Provisioning new machine with config: &{Name:custom-weave-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:custom-weave-20211117121608-2067 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:29:28.559860   16161 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:29:28.612591   16161 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:29:28.613007   16161 start.go:160] libmachine.API.Create for "custom-weave-20211117121608-2067" (driver="docker")
	I1117 12:29:28.613069   16161 client.go:168] LocalClient.Create starting
	I1117 12:29:28.613271   16161 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:29:28.633637   16161 main.go:130] libmachine: Decoding PEM data...
	I1117 12:29:28.633685   16161 main.go:130] libmachine: Parsing certificate...
	I1117 12:29:28.633775   16161 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:29:28.633850   16161 main.go:130] libmachine: Decoding PEM data...
	I1117 12:29:28.633877   16161 main.go:130] libmachine: Parsing certificate...
	I1117 12:29:28.635053   16161 cli_runner.go:115] Run: docker network inspect custom-weave-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:29:28.735999   16161 cli_runner.go:162] docker network inspect custom-weave-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:29:28.736100   16161 network_create.go:254] running [docker network inspect custom-weave-20211117121608-2067] to gather additional debugging logs...
	I1117 12:29:28.736115   16161 cli_runner.go:115] Run: docker network inspect custom-weave-20211117121608-2067
	W1117 12:29:28.836719   16161 cli_runner.go:162] docker network inspect custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:29:28.836744   16161 network_create.go:257] error running [docker network inspect custom-weave-20211117121608-2067]: docker network inspect custom-weave-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20211117121608-2067
	I1117 12:29:28.836763   16161 network_create.go:259] output of [docker network inspect custom-weave-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20211117121608-2067
	
	** /stderr **
	I1117 12:29:28.836853   16161 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:29:28.939849   16161 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000f562c0] misses:0}
	I1117 12:29:28.939894   16161 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:29:28.939917   16161 network_create.go:106] attempt to create docker network custom-weave-20211117121608-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:29:28.940000   16161 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20211117121608-2067
	I1117 12:29:34.475516   16161 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20211117121608-2067: (5.535499935s)
	I1117 12:29:34.475547   16161 network_create.go:90] docker network custom-weave-20211117121608-2067 192.168.49.0/24 created
	I1117 12:29:34.475565   16161 kic.go:106] calculated static IP "192.168.49.2" for the "custom-weave-20211117121608-2067" container
	I1117 12:29:34.475689   16161 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:29:34.573988   16161 cli_runner.go:115] Run: docker volume create custom-weave-20211117121608-2067 --label name.minikube.sigs.k8s.io=custom-weave-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:29:34.674142   16161 oci.go:102] Successfully created a docker volume custom-weave-20211117121608-2067
	I1117 12:29:34.674303   16161 cli_runner.go:115] Run: docker run --rm --name custom-weave-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20211117121608-2067 --entrypoint /usr/bin/test -v custom-weave-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:29:35.135477   16161 oci.go:106] Successfully prepared a docker volume custom-weave-20211117121608-2067
	I1117 12:29:35.135528   16161 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	E1117 12:29:35.135536   16161 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:29:35.135549   16161 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:29:35.135559   16161 client.go:171] LocalClient.Create took 6.522522085s
	I1117 12:29:35.135657   16161 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:29:37.135960   16161 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:29:37.136062   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:29:37.259466   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:29:37.259638   16161 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:37.536173   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:29:37.660443   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:29:37.660543   16161 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:38.206205   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:29:38.320793   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:29:38.320884   16161 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:38.984316   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:29:39.109740   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	W1117 12:29:39.109855   16161 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	
	W1117 12:29:39.109888   16161 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:39.109897   16161 start.go:129] duration metric: createHost completed in 10.550098305s
	I1117 12:29:39.109904   16161 start.go:80] releasing machines lock for "custom-weave-20211117121608-2067", held for 10.550194296s
	W1117 12:29:39.109942   16161 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:29:39.110488   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:39.232417   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:39.232466   16161 delete.go:82] Unable to get host status for custom-weave-20211117121608-2067, assuming it has already been deleted: state: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	W1117 12:29:39.232602   16161 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:29:39.232616   16161 start.go:547] Will try again in 5 seconds ...
	I1117 12:29:41.077259   16161 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.941576401s)
	I1117 12:29:41.077283   16161 kic.go:188] duration metric: took 5.941772 seconds to extract preloaded images to volume
	I1117 12:29:44.239705   16161 start.go:313] acquiring machines lock for custom-weave-20211117121608-2067: {Name:mk49be788ed95d8c930ff23cb75c5a0ddbb7adcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:29:44.239869   16161 start.go:317] acquired machines lock for "custom-weave-20211117121608-2067" in 135.924µs
	I1117 12:29:44.239907   16161 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:29:44.239920   16161 fix.go:55] fixHost starting: 
	I1117 12:29:44.240400   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:44.397449   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:44.397504   16161 fix.go:108] recreateIfNeeded on custom-weave-20211117121608-2067: state= err=unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:44.397533   16161 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:29:44.424297   16161 out.go:176] * docker "custom-weave-20211117121608-2067" container is missing, will recreate.
	I1117 12:29:44.424315   16161 delete.go:124] DEMOLISHING custom-weave-20211117121608-2067 ...
	I1117 12:29:44.424466   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:44.549925   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:29:44.549995   16161 stop.go:75] unable to get state: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:44.550035   16161 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:44.550740   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:44.681194   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:44.681249   16161 delete.go:82] Unable to get host status for custom-weave-20211117121608-2067, assuming it has already been deleted: state: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:44.681360   16161 cli_runner.go:115] Run: docker container inspect -f {{.Id}} custom-weave-20211117121608-2067
	W1117 12:29:44.796882   16161 cli_runner.go:162] docker container inspect -f {{.Id}} custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:29:44.796911   16161 kic.go:360] could not find the container custom-weave-20211117121608-2067 to remove it. will try anyways
	I1117 12:29:44.797002   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:44.918493   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:29:44.918555   16161 oci.go:83] error getting container status, will try to delete anyways: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:44.918673   16161 cli_runner.go:115] Run: docker exec --privileged -t custom-weave-20211117121608-2067 /bin/bash -c "sudo init 0"
	W1117 12:29:45.042620   16161 cli_runner.go:162] docker exec --privileged -t custom-weave-20211117121608-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:29:45.042648   16161 oci.go:656] error shutdown custom-weave-20211117121608-2067: docker exec --privileged -t custom-weave-20211117121608-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:46.051695   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:46.180461   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:46.180509   16161 oci.go:668] temporary error verifying shutdown: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:46.180528   16161 oci.go:670] temporary error: container custom-weave-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:46.180569   16161 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:46.651131   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:46.772665   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:46.772709   16161 oci.go:668] temporary error verifying shutdown: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:46.772719   16161 oci.go:670] temporary error: container custom-weave-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:46.772744   16161 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:47.663071   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:47.767107   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:47.825092   16161 oci.go:668] temporary error verifying shutdown: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:47.825114   16161 oci.go:670] temporary error: container custom-weave-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:47.825136   16161 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:48.461765   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:48.574907   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:48.574957   16161 oci.go:668] temporary error verifying shutdown: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:48.574969   16161 oci.go:670] temporary error: container custom-weave-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:48.574997   16161 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:49.685158   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:49.787486   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:49.787534   16161 oci.go:668] temporary error verifying shutdown: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:49.787546   16161 oci.go:670] temporary error: container custom-weave-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:49.787573   16161 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:51.300707   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:51.403317   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:51.403356   16161 oci.go:668] temporary error verifying shutdown: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:51.403366   16161 oci.go:670] temporary error: container custom-weave-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:51.403388   16161 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:54.450949   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:29:54.553420   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:29:54.553468   16161 oci.go:668] temporary error verifying shutdown: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:29:54.553480   16161 oci.go:670] temporary error: container custom-weave-20211117121608-2067 status is  but expect it to be exited
	I1117 12:29:54.553522   16161 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:30:00.340714   16161 cli_runner.go:115] Run: docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:00.444737   16161 cli_runner.go:162] docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:00.444777   16161 oci.go:668] temporary error verifying shutdown: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:30:00.444788   16161 oci.go:670] temporary error: container custom-weave-20211117121608-2067 status is  but expect it to be exited
	I1117 12:30:00.444814   16161 oci.go:87] couldn't shut down custom-weave-20211117121608-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "custom-weave-20211117121608-2067": docker container inspect custom-weave-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	 
	I1117 12:30:00.444904   16161 cli_runner.go:115] Run: docker rm -f -v custom-weave-20211117121608-2067
	I1117 12:30:00.546676   16161 cli_runner.go:115] Run: docker container inspect -f {{.Id}} custom-weave-20211117121608-2067
	W1117 12:30:00.647584   16161 cli_runner.go:162] docker container inspect -f {{.Id}} custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:30:00.647685   16161 cli_runner.go:115] Run: docker network inspect custom-weave-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:30:00.755315   16161 cli_runner.go:162] docker network inspect custom-weave-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:30:00.755418   16161 network_create.go:254] running [docker network inspect custom-weave-20211117121608-2067] to gather additional debugging logs...
	I1117 12:30:00.755441   16161 cli_runner.go:115] Run: docker network inspect custom-weave-20211117121608-2067
	W1117 12:30:00.857212   16161 cli_runner.go:162] docker network inspect custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:30:00.857238   16161 network_create.go:257] error running [docker network inspect custom-weave-20211117121608-2067]: docker network inspect custom-weave-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20211117121608-2067
	I1117 12:30:00.857250   16161 network_create.go:259] output of [docker network inspect custom-weave-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20211117121608-2067
	
	** /stderr **
	W1117 12:30:00.857497   16161 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:30:00.857503   16161 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:30:01.857650   16161 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:30:01.884535   16161 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:30:01.884687   16161 start.go:160] libmachine.API.Create for "custom-weave-20211117121608-2067" (driver="docker")
	I1117 12:30:01.884717   16161 client.go:168] LocalClient.Create starting
	I1117 12:30:01.884844   16161 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:30:01.884897   16161 main.go:130] libmachine: Decoding PEM data...
	I1117 12:30:01.884924   16161 main.go:130] libmachine: Parsing certificate...
	I1117 12:30:01.884997   16161 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:30:01.885037   16161 main.go:130] libmachine: Decoding PEM data...
	I1117 12:30:01.885057   16161 main.go:130] libmachine: Parsing certificate...
	I1117 12:30:01.885619   16161 cli_runner.go:115] Run: docker network inspect custom-weave-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:30:01.987329   16161 cli_runner.go:162] docker network inspect custom-weave-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:30:01.987439   16161 network_create.go:254] running [docker network inspect custom-weave-20211117121608-2067] to gather additional debugging logs...
	I1117 12:30:01.987462   16161 cli_runner.go:115] Run: docker network inspect custom-weave-20211117121608-2067
	W1117 12:30:02.088375   16161 cli_runner.go:162] docker network inspect custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:30:02.088405   16161 network_create.go:257] error running [docker network inspect custom-weave-20211117121608-2067]: docker network inspect custom-weave-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20211117121608-2067
	I1117 12:30:02.088419   16161 network_create.go:259] output of [docker network inspect custom-weave-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20211117121608-2067
	
	** /stderr **
	I1117 12:30:02.088521   16161 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:30:02.192307   16161 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000f562c0] amended:false}} dirty:map[] misses:0}
	I1117 12:30:02.192340   16161 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:30:02.192530   16161 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000f562c0] amended:true}} dirty:map[192.168.49.0:0xc000f562c0 192.168.58.0:0xc000f56428] misses:0}
	I1117 12:30:02.192543   16161 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:30:02.192549   16161 network_create.go:106] attempt to create docker network custom-weave-20211117121608-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:30:02.192634   16161 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20211117121608-2067
	I1117 12:30:11.347695   16161 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20211117121608-2067: (9.155053793s)
	I1117 12:30:11.347717   16161 network_create.go:90] docker network custom-weave-20211117121608-2067 192.168.58.0/24 created
	I1117 12:30:11.347739   16161 kic.go:106] calculated static IP "192.168.58.2" for the "custom-weave-20211117121608-2067" container
	I1117 12:30:11.347850   16161 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:30:11.449682   16161 cli_runner.go:115] Run: docker volume create custom-weave-20211117121608-2067 --label name.minikube.sigs.k8s.io=custom-weave-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:30:11.552904   16161 oci.go:102] Successfully created a docker volume custom-weave-20211117121608-2067
	I1117 12:30:11.553036   16161 cli_runner.go:115] Run: docker run --rm --name custom-weave-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20211117121608-2067 --entrypoint /usr/bin/test -v custom-weave-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:30:11.944469   16161 oci.go:106] Successfully prepared a docker volume custom-weave-20211117121608-2067
	E1117 12:30:11.944511   16161 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:30:11.944522   16161 client.go:171] LocalClient.Create took 10.059863923s
	I1117 12:30:11.944523   16161 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:30:11.944545   16161 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:30:11.944665   16161 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:30:13.952807   16161 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:30:13.952950   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:30:14.093981   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:30:14.094093   16161 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:30:14.276459   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:30:14.418037   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:30:14.418119   16161 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:30:14.750688   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:30:14.868311   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:30:14.868389   16161 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:30:15.334065   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:30:15.452819   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	W1117 12:30:15.452928   16161 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	
	W1117 12:30:15.452948   16161 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:30:15.452958   16161 start.go:129] duration metric: createHost completed in 13.595379589s
	I1117 12:30:15.453020   16161 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:30:15.453095   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:30:15.573424   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:30:15.573527   16161 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:30:15.776372   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:30:15.894268   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:30:15.894356   16161 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:30:16.200665   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:30:16.329239   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	I1117 12:30:16.329369   16161 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:30:17.001154   16161 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067
	W1117 12:30:17.185282   16161 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067 returned with exit code 1
	W1117 12:30:17.185417   16161 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	
	W1117 12:30:17.185467   16161 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-weave-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-weave-20211117121608-2067
	I1117 12:30:17.185483   16161 fix.go:57] fixHost completed within 32.945768659s
	I1117 12:30:17.185492   16161 start.go:80] releasing machines lock for "custom-weave-20211117121608-2067", held for 32.94581634s
	W1117 12:30:17.185640   16161 out.go:241] * Failed to start docker container. Running "minikube delete -p custom-weave-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p custom-weave-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:30:17.233133   16161 out.go:176] 
	W1117 12:30:17.233257   16161 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:30:17.233271   16161 out.go:241] * 
	* 
	W1117 12:30:17.233980   16161 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:30:17.336200   16161 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (49.65s)
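Note (illustrative sketch, not part of the captured test output): the GUEST_PROVISION failure above comes from the kic driver being unable to locate kernel modules, so the node container is never created and every later "docker container inspect" call returns "No such container". Following the hint minikube itself prints in the log, a minimal cleanup and retry for this profile would look like the commands below; only the delete and logs commands are taken verbatim from the output, and the start line is a sketch that assumes the docker driver and the same profile name, with the test's other flags omitted.

	minikube delete -p custom-weave-20211117121608-2067      # suggested by minikube in the output above
	minikube start -p custom-weave-20211117121608-2067 --driver=docker   # retry; other test flags omitted
	minikube logs --file=logs.txt                            # attach to a GitHub issue if it still fails
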

TestNetworkPlugins/group/enable-default-cni/Start (50.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20211117121607-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
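Note (illustrative, not part of the captured test output): the command above still passes the deprecated --enable-default-cni flag; as the start_flags.go warning further down in this log shows, minikube rewrites it to --cni=bridge. An equivalent invocation with the non-deprecated flag, mirroring the other flags from the line above, would be:

	out/minikube-darwin-amd64 start -p enable-default-cni-20211117121607-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker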

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p enable-default-cni-20211117121607-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : exit status 80 (50.810504993s)

-- stdout --
	* [enable-default-cni-20211117121607-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node enable-default-cni-20211117121607-2067 in cluster enable-default-cni-20211117121607-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "enable-default-cni-20211117121607-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:29:56.753934   16435 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:29:56.754135   16435 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:29:56.754141   16435 out.go:310] Setting ErrFile to fd 2...
	I1117 12:29:56.754144   16435 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:29:56.754219   16435 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:29:56.754531   16435 out.go:304] Setting JSON to false
	I1117 12:29:56.778116   16435 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3571,"bootTime":1637177425,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:29:56.778203   16435 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:29:56.804912   16435 out.go:176] * [enable-default-cni-20211117121607-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:29:56.805125   16435 notify.go:174] Checking for updates...
	I1117 12:29:56.853602   16435 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:29:56.879622   16435 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:29:56.905657   16435 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:29:56.931595   16435 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:29:56.932121   16435 config.go:176] Loaded profile config "custom-weave-20211117121608-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:29:56.932206   16435 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:29:56.932239   16435 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:29:57.021531   16435 docker.go:132] docker version: linux-20.10.5
	I1117 12:29:57.021674   16435 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:29:57.174811   16435 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:29:57.127730442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:29:57.227488   16435 out.go:176] * Using the docker driver based on user configuration
	I1117 12:29:57.227530   16435 start.go:280] selected driver: docker
	I1117 12:29:57.227544   16435 start.go:775] validating driver "docker" against <nil>
	I1117 12:29:57.227570   16435 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:29:57.231002   16435 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:29:57.382251   16435 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:29:57.3355796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:29:57.382356   16435 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	E1117 12:29:57.382470   16435 start_flags.go:399] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1117 12:29:57.382486   16435 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:29:57.382500   16435 cni.go:93] Creating CNI manager for "bridge"
	I1117 12:29:57.382508   16435 start_flags.go:277] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1117 12:29:57.382516   16435 start_flags.go:282] config:
	{Name:enable-default-cni-20211117121607-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:enable-default-cni-20211117121607-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:29:57.431071   16435 out.go:176] * Starting control plane node enable-default-cni-20211117121607-2067 in cluster enable-default-cni-20211117121607-2067
	I1117 12:29:57.431165   16435 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:29:57.457179   16435 out.go:176] * Pulling base image ...
	I1117 12:29:57.457290   16435 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:29:57.457357   16435 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:29:57.457366   16435 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:29:57.457398   16435 cache.go:57] Caching tarball of preloaded images
	I1117 12:29:57.457617   16435 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:29:57.457644   16435 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:29:57.458621   16435 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/enable-default-cni-20211117121607-2067/config.json ...
	I1117 12:29:57.458790   16435 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/enable-default-cni-20211117121607-2067/config.json: {Name:mk51492fb6fdadee1a3f06738765ec8c7eefd85f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:29:57.572008   16435 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:29:57.572035   16435 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:29:57.572048   16435 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:29:57.572102   16435 start.go:313] acquiring machines lock for enable-default-cni-20211117121607-2067: {Name:mk3d6201eabec31d69ac57dc3940aede3899511d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:29:57.572239   16435 start.go:317] acquired machines lock for "enable-default-cni-20211117121607-2067" in 125.538µs
	I1117 12:29:57.572266   16435 start.go:89] Provisioning new machine with config: &{Name:enable-default-cni-20211117121607-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:enable-default-cni-20211117121607-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:29:57.572337   16435 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:29:57.620684   16435 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:29:57.621082   16435 start.go:160] libmachine.API.Create for "enable-default-cni-20211117121607-2067" (driver="docker")
	I1117 12:29:57.621139   16435 client.go:168] LocalClient.Create starting
	I1117 12:29:57.621339   16435 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:29:57.621426   16435 main.go:130] libmachine: Decoding PEM data...
	I1117 12:29:57.621460   16435 main.go:130] libmachine: Parsing certificate...
	I1117 12:29:57.621582   16435 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:29:57.621636   16435 main.go:130] libmachine: Decoding PEM data...
	I1117 12:29:57.621660   16435 main.go:130] libmachine: Parsing certificate...
	I1117 12:29:57.622720   16435 cli_runner.go:115] Run: docker network inspect enable-default-cni-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:29:57.724325   16435 cli_runner.go:162] docker network inspect enable-default-cni-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:29:57.724419   16435 network_create.go:254] running [docker network inspect enable-default-cni-20211117121607-2067] to gather additional debugging logs...
	I1117 12:29:57.724435   16435 cli_runner.go:115] Run: docker network inspect enable-default-cni-20211117121607-2067
	W1117 12:29:57.842274   16435 cli_runner.go:162] docker network inspect enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:29:57.842300   16435 network_create.go:257] error running [docker network inspect enable-default-cni-20211117121607-2067]: docker network inspect enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20211117121607-2067
	I1117 12:29:57.842314   16435 network_create.go:259] output of [docker network inspect enable-default-cni-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20211117121607-2067
	
	** /stderr **
	I1117 12:29:57.842411   16435 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:29:57.944146   16435 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000ec90] misses:0}
	I1117 12:29:57.944182   16435 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:29:57.944198   16435 network_create.go:106] attempt to create docker network enable-default-cni-20211117121607-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:29:57.944273   16435 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20211117121607-2067
	I1117 12:30:03.881442   16435 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20211117121607-2067: (5.937167595s)
	I1117 12:30:03.881469   16435 network_create.go:90] docker network enable-default-cni-20211117121607-2067 192.168.49.0/24 created
	I1117 12:30:03.881496   16435 kic.go:106] calculated static IP "192.168.49.2" for the "enable-default-cni-20211117121607-2067" container
	I1117 12:30:03.881613   16435 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:30:03.980476   16435 cli_runner.go:115] Run: docker volume create enable-default-cni-20211117121607-2067 --label name.minikube.sigs.k8s.io=enable-default-cni-20211117121607-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:30:04.082538   16435 oci.go:102] Successfully created a docker volume enable-default-cni-20211117121607-2067
	I1117 12:30:04.082661   16435 cli_runner.go:115] Run: docker run --rm --name enable-default-cni-20211117121607-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20211117121607-2067 --entrypoint /usr/bin/test -v enable-default-cni-20211117121607-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:30:04.591349   16435 oci.go:106] Successfully prepared a docker volume enable-default-cni-20211117121607-2067
	E1117 12:30:04.591413   16435 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:30:04.591432   16435 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:30:04.591441   16435 client.go:171] LocalClient.Create took 6.970335785s
	I1117 12:30:04.591463   16435 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:30:04.591698   16435 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:30:06.600717   16435 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:30:06.600819   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:06.724239   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:06.748001   16435 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:07.034109   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:07.166259   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:07.166339   16435 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:07.706837   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:07.841610   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:07.841700   16435 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:08.501735   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:08.604197   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	W1117 12:30:08.604273   16435 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	
	W1117 12:30:08.604296   16435 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:08.604307   16435 start.go:129] duration metric: createHost completed in 11.032033803s
	I1117 12:30:08.604314   16435 start.go:80] releasing machines lock for "enable-default-cni-20211117121607-2067", held for 11.032136894s
	W1117 12:30:08.604329   16435 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:30:08.604771   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:08.706099   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:08.706144   16435 delete.go:82] Unable to get host status for enable-default-cni-20211117121607-2067, assuming it has already been deleted: state: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	W1117 12:30:08.706253   16435 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:30:08.706267   16435 start.go:547] Will try again in 5 seconds ...
	I1117 12:30:10.821760   16435 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.230039736s)
	I1117 12:30:10.822131   16435 kic.go:188] duration metric: took 6.230665 seconds to extract preloaded images to volume
	I1117 12:30:13.706804   16435 start.go:313] acquiring machines lock for enable-default-cni-20211117121607-2067: {Name:mk3d6201eabec31d69ac57dc3940aede3899511d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:30:13.706923   16435 start.go:317] acquired machines lock for "enable-default-cni-20211117121607-2067" in 95.169µs
	I1117 12:30:13.706946   16435 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:30:13.706954   16435 fix.go:55] fixHost starting: 
	I1117 12:30:13.707205   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:13.827530   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:13.827580   16435 fix.go:108] recreateIfNeeded on enable-default-cni-20211117121607-2067: state= err=unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:13.827599   16435 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:30:13.853291   16435 out.go:176] * docker "enable-default-cni-20211117121607-2067" container is missing, will recreate.
	I1117 12:30:13.853307   16435 delete.go:124] DEMOLISHING enable-default-cni-20211117121607-2067 ...
	I1117 12:30:13.853432   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:13.981779   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:30:13.981852   16435 stop.go:75] unable to get state: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:13.981876   16435 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:13.982407   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:14.119096   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:14.119152   16435 delete.go:82] Unable to get host status for enable-default-cni-20211117121607-2067, assuming it has already been deleted: state: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:14.119279   16435 cli_runner.go:115] Run: docker container inspect -f {{.Id}} enable-default-cni-20211117121607-2067
	W1117 12:30:14.239545   16435 cli_runner.go:162] docker container inspect -f {{.Id}} enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:14.239578   16435 kic.go:360] could not find the container enable-default-cni-20211117121607-2067 to remove it. will try anyways
	I1117 12:30:14.239685   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:14.376680   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:30:14.376759   16435 oci.go:83] error getting container status, will try to delete anyways: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:14.376889   16435 cli_runner.go:115] Run: docker exec --privileged -t enable-default-cni-20211117121607-2067 /bin/bash -c "sudo init 0"
	W1117 12:30:14.499115   16435 cli_runner.go:162] docker exec --privileged -t enable-default-cni-20211117121607-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:30:14.499140   16435 oci.go:656] error shutdown enable-default-cni-20211117121607-2067: docker exec --privileged -t enable-default-cni-20211117121607-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:15.503395   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:15.634520   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:15.634567   16435 oci.go:668] temporary error verifying shutdown: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:15.634577   16435 oci.go:670] temporary error: container enable-default-cni-20211117121607-2067 status is  but expect it to be exited
	I1117 12:30:15.634600   16435 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:16.100620   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:16.224045   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:16.224104   16435 oci.go:668] temporary error verifying shutdown: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:16.224126   16435 oci.go:670] temporary error: container enable-default-cni-20211117121607-2067 status is  but expect it to be exited
	I1117 12:30:16.224148   16435 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:17.115678   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:17.344536   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:17.344587   16435 oci.go:668] temporary error verifying shutdown: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:17.344603   16435 oci.go:670] temporary error: container enable-default-cni-20211117121607-2067 status is  but expect it to be exited
	I1117 12:30:17.344637   16435 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:17.984118   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:18.099549   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:18.099596   16435 oci.go:668] temporary error verifying shutdown: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:18.099604   16435 oci.go:670] temporary error: container enable-default-cni-20211117121607-2067 status is  but expect it to be exited
	I1117 12:30:18.099629   16435 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:19.209105   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:19.333643   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:19.333691   16435 oci.go:668] temporary error verifying shutdown: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:19.333706   16435 oci.go:670] temporary error: container enable-default-cni-20211117121607-2067 status is  but expect it to be exited
	I1117 12:30:19.333733   16435 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:20.850642   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:20.953051   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:20.953100   16435 oci.go:668] temporary error verifying shutdown: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:20.953110   16435 oci.go:670] temporary error: container enable-default-cni-20211117121607-2067 status is  but expect it to be exited
	I1117 12:30:20.953136   16435 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:24.000605   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:24.102028   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:24.102069   16435 oci.go:668] temporary error verifying shutdown: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:24.102076   16435 oci.go:670] temporary error: container enable-default-cni-20211117121607-2067 status is  but expect it to be exited
	I1117 12:30:24.102099   16435 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:29.884466   16435 cli_runner.go:115] Run: docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}
	W1117 12:30:29.984577   16435 cli_runner.go:162] docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:29.984619   16435 oci.go:668] temporary error verifying shutdown: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:29.984627   16435 oci.go:670] temporary error: container enable-default-cni-20211117121607-2067 status is  but expect it to be exited
	I1117 12:30:29.984655   16435 oci.go:87] couldn't shut down enable-default-cni-20211117121607-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "enable-default-cni-20211117121607-2067": docker container inspect enable-default-cni-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	 
	I1117 12:30:29.984736   16435 cli_runner.go:115] Run: docker rm -f -v enable-default-cni-20211117121607-2067
	I1117 12:30:30.086609   16435 cli_runner.go:115] Run: docker container inspect -f {{.Id}} enable-default-cni-20211117121607-2067
	W1117 12:30:30.188845   16435 cli_runner.go:162] docker container inspect -f {{.Id}} enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:30.188972   16435 cli_runner.go:115] Run: docker network inspect enable-default-cni-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:30:30.290391   16435 cli_runner.go:162] docker network inspect enable-default-cni-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:30:30.290488   16435 network_create.go:254] running [docker network inspect enable-default-cni-20211117121607-2067] to gather additional debugging logs...
	I1117 12:30:30.290511   16435 cli_runner.go:115] Run: docker network inspect enable-default-cni-20211117121607-2067
	W1117 12:30:30.393485   16435 cli_runner.go:162] docker network inspect enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:30.393515   16435 network_create.go:257] error running [docker network inspect enable-default-cni-20211117121607-2067]: docker network inspect enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20211117121607-2067
	I1117 12:30:30.393527   16435 network_create.go:259] output of [docker network inspect enable-default-cni-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20211117121607-2067
	
	** /stderr **
	W1117 12:30:30.393796   16435 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:30:30.393803   16435 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:30:31.400985   16435 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:30:31.428103   16435 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:30:31.428185   16435 start.go:160] libmachine.API.Create for "enable-default-cni-20211117121607-2067" (driver="docker")
	I1117 12:30:31.428206   16435 client.go:168] LocalClient.Create starting
	I1117 12:30:31.428345   16435 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:30:31.428405   16435 main.go:130] libmachine: Decoding PEM data...
	I1117 12:30:31.428430   16435 main.go:130] libmachine: Parsing certificate...
	I1117 12:30:31.428493   16435 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:30:31.448851   16435 main.go:130] libmachine: Decoding PEM data...
	I1117 12:30:31.448898   16435 main.go:130] libmachine: Parsing certificate...
	I1117 12:30:31.449655   16435 cli_runner.go:115] Run: docker network inspect enable-default-cni-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:30:31.559788   16435 cli_runner.go:162] docker network inspect enable-default-cni-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:30:31.559888   16435 network_create.go:254] running [docker network inspect enable-default-cni-20211117121607-2067] to gather additional debugging logs...
	I1117 12:30:31.559910   16435 cli_runner.go:115] Run: docker network inspect enable-default-cni-20211117121607-2067
	W1117 12:30:31.661560   16435 cli_runner.go:162] docker network inspect enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:31.661583   16435 network_create.go:257] error running [docker network inspect enable-default-cni-20211117121607-2067]: docker network inspect enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-20211117121607-2067
	I1117 12:30:31.661595   16435 network_create.go:259] output of [docker network inspect enable-default-cni-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-20211117121607-2067
	
	** /stderr **
	I1117 12:30:31.661687   16435 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:30:31.783885   16435 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000ec90] amended:false}} dirty:map[] misses:0}
	I1117 12:30:31.783924   16435 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:30:31.784136   16435 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000ec90] amended:true}} dirty:map[192.168.49.0:0xc00000ec90 192.168.58.0:0xc0006a2360] misses:0}
	I1117 12:30:31.784148   16435 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:30:31.784154   16435 network_create.go:106] attempt to create docker network enable-default-cni-20211117121607-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:30:31.784235   16435 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20211117121607-2067
	I1117 12:30:41.604744   16435 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true enable-default-cni-20211117121607-2067: (9.820509002s)
	I1117 12:30:41.604767   16435 network_create.go:90] docker network enable-default-cni-20211117121607-2067 192.168.58.0/24 created
	I1117 12:30:41.604783   16435 kic.go:106] calculated static IP "192.168.58.2" for the "enable-default-cni-20211117121607-2067" container
	I1117 12:30:41.605431   16435 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:30:41.708253   16435 cli_runner.go:115] Run: docker volume create enable-default-cni-20211117121607-2067 --label name.minikube.sigs.k8s.io=enable-default-cni-20211117121607-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:30:41.829833   16435 oci.go:102] Successfully created a docker volume enable-default-cni-20211117121607-2067
	I1117 12:30:41.829952   16435 cli_runner.go:115] Run: docker run --rm --name enable-default-cni-20211117121607-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-20211117121607-2067 --entrypoint /usr/bin/test -v enable-default-cni-20211117121607-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:30:42.243410   16435 oci.go:106] Successfully prepared a docker volume enable-default-cni-20211117121607-2067
	E1117 12:30:42.243460   16435 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:30:42.243472   16435 client.go:171] LocalClient.Create took 10.815328308s
	I1117 12:30:42.243475   16435 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:30:42.243498   16435 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:30:42.243638   16435 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:30:44.250463   16435 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:30:44.250564   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:44.379300   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:44.379381   16435 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:44.558270   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:44.676862   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:44.676978   16435 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:45.007718   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:45.129431   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:45.129512   16435 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:45.590074   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:45.714839   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	W1117 12:30:45.714933   16435 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	
	W1117 12:30:45.714960   16435 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:45.714974   16435 start.go:129] duration metric: createHost completed in 14.314027792s
	I1117 12:30:45.715046   16435 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:30:45.715109   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:45.840467   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:45.840556   16435 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:46.036871   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:46.156390   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:46.156479   16435 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:46.454132   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:46.593529   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	I1117 12:30:46.593608   16435 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:47.257238   16435 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067
	W1117 12:30:47.381583   16435 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067 returned with exit code 1
	W1117 12:30:47.381696   16435 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	
	W1117 12:30:47.381718   16435 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-20211117121607-2067
	I1117 12:30:47.381741   16435 fix.go:57] fixHost completed within 33.674996922s
	I1117 12:30:47.381750   16435 start.go:80] releasing machines lock for "enable-default-cni-20211117121607-2067", held for 33.675030162s
	W1117 12:30:47.381910   16435 out.go:241] * Failed to start docker container. Running "minikube delete -p enable-default-cni-20211117121607-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p enable-default-cni-20211117121607-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:30:47.430194   16435 out.go:176] 
	W1117 12:30:47.430334   16435 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:30:47.430345   16435 out.go:241] * 
	* 
	W1117 12:30:47.431031   16435 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:30:47.503262   16435 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (50.82s)
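Both createHost attempts above create the docker network and volume for the profile, but the kic node container itself is never created: oci.go reports "error getting kernel modules path: Unable to locate kernel modules", every subsequent docker container inspect for port 22 answers "No such container", and the run ends with GUEST_PROVISION / exit status 80. The log's own advice is to delete the half-created profile before retrying. A minimal cleanup sketch, using only commands that already appear in this log (the profile/network/volume name is the one from this run, and out/minikube-darwin-amd64 is assumed to stand in for minikube on this CI host):

    # Drop the partially created profile, as the failure message suggests.
    out/minikube-darwin-amd64 delete -p enable-default-cni-20211117121607-2067
    # Confirm the leftover docker network and volume are gone before re-running.
    docker network inspect enable-default-cni-20211117121607-2067
    docker volume ls --filter name=enable-default-cni-20211117121607-2067

Whether that clears the underlying kernel-modules problem is not shown here; the kindnet run below also exits with status 80.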

TestNetworkPlugins/group/kindnet/Start (50.5s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20211117121608-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-20211117121608-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : exit status 80 (50.489824163s)

-- stdout --
	* [kindnet-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node kindnet-20211117121608-2067 in cluster kindnet-20211117121608-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kindnet-20211117121608-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:30:26.554214   16726 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:30:26.554358   16726 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:30:26.554363   16726 out.go:310] Setting ErrFile to fd 2...
	I1117 12:30:26.554366   16726 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:30:26.554451   16726 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:30:26.554764   16726 out.go:304] Setting JSON to false
	I1117 12:30:26.580773   16726 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3601,"bootTime":1637177425,"procs":327,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:30:26.580873   16726 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:30:26.606956   16726 out.go:176] * [kindnet-20211117121608-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:30:26.607169   16726 notify.go:174] Checking for updates...
	I1117 12:30:26.655866   16726 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:30:26.681746   16726 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:30:26.707938   16726 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:30:26.734656   16726 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:30:26.735071   16726 config.go:176] Loaded profile config "enable-default-cni-20211117121607-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:30:26.760504   16726 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:30:26.760547   16726 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:30:26.850343   16726 docker.go:132] docker version: linux-20.10.5
	I1117 12:30:26.850464   16726 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:30:27.004118   16726 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:30:26.956615192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:30:27.052962   16726 out.go:176] * Using the docker driver based on user configuration
	I1117 12:30:27.053049   16726 start.go:280] selected driver: docker
	I1117 12:30:27.053061   16726 start.go:775] validating driver "docker" against <nil>
	I1117 12:30:27.053079   16726 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:30:27.056444   16726 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:30:27.206958   16726 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:30:27.160831298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:30:27.207051   16726 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:30:27.207196   16726 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:30:27.207214   16726 cni.go:93] Creating CNI manager for "kindnet"
	I1117 12:30:27.207224   16726 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1117 12:30:27.207229   16726 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1117 12:30:27.207233   16726 start_flags.go:277] Found "CNI" CNI - setting NetworkPlugin=cni
	I1117 12:30:27.207241   16726 start_flags.go:282] config:
	{Name:kindnet-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:kindnet-20211117121608-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Ne
tworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:30:27.255913   16726 out.go:176] * Starting control plane node kindnet-20211117121608-2067 in cluster kindnet-20211117121608-2067
	I1117 12:30:27.255974   16726 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:30:27.282020   16726 out.go:176] * Pulling base image ...
	I1117 12:30:27.282092   16726 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:30:27.282187   16726 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:30:27.282187   16726 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:30:27.282226   16726 cache.go:57] Caching tarball of preloaded images
	I1117 12:30:27.282446   16726 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:30:27.282478   16726 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:30:27.283484   16726 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/kindnet-20211117121608-2067/config.json ...
	I1117 12:30:27.283626   16726 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/kindnet-20211117121608-2067/config.json: {Name:mk4c33b81f0b1dc6268b19d640ebeae97109459b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:30:27.398136   16726 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:30:27.398174   16726 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:30:27.398186   16726 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:30:27.398287   16726 start.go:313] acquiring machines lock for kindnet-20211117121608-2067: {Name:mk8bc8dd834a7b72a089b0a8f85b7f8472a726f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:30:27.398423   16726 start.go:317] acquired machines lock for "kindnet-20211117121608-2067" in 123.982µs
	I1117 12:30:27.398455   16726 start.go:89] Provisioning new machine with config: &{Name:kindnet-20211117121608-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:kindnet-20211117121608-2067 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:30:27.398548   16726 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:30:27.447274   16726 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:30:27.447652   16726 start.go:160] libmachine.API.Create for "kindnet-20211117121608-2067" (driver="docker")
	I1117 12:30:27.447751   16726 client.go:168] LocalClient.Create starting
	I1117 12:30:27.447953   16726 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:30:27.448042   16726 main.go:130] libmachine: Decoding PEM data...
	I1117 12:30:27.448076   16726 main.go:130] libmachine: Parsing certificate...
	I1117 12:30:27.448180   16726 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:30:27.448235   16726 main.go:130] libmachine: Decoding PEM data...
	I1117 12:30:27.448258   16726 main.go:130] libmachine: Parsing certificate...
	I1117 12:30:27.449143   16726 cli_runner.go:115] Run: docker network inspect kindnet-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:30:27.550323   16726 cli_runner.go:162] docker network inspect kindnet-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:30:27.550448   16726 network_create.go:254] running [docker network inspect kindnet-20211117121608-2067] to gather additional debugging logs...
	I1117 12:30:27.550464   16726 cli_runner.go:115] Run: docker network inspect kindnet-20211117121608-2067
	W1117 12:30:27.650467   16726 cli_runner.go:162] docker network inspect kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:30:27.650491   16726 network_create.go:257] error running [docker network inspect kindnet-20211117121608-2067]: docker network inspect kindnet-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20211117121608-2067
	I1117 12:30:27.650506   16726 network_create.go:259] output of [docker network inspect kindnet-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20211117121608-2067
	
	** /stderr **
	I1117 12:30:27.650606   16726 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:30:27.754369   16726 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000e838] misses:0}
	I1117 12:30:27.754421   16726 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:30:27.754435   16726 network_create.go:106] attempt to create docker network kindnet-20211117121608-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:30:27.754511   16726 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211117121608-2067
	I1117 12:30:33.484246   16726 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211117121608-2067: (5.729722616s)
	I1117 12:30:33.484275   16726 network_create.go:90] docker network kindnet-20211117121608-2067 192.168.49.0/24 created
	I1117 12:30:33.484307   16726 kic.go:106] calculated static IP "192.168.49.2" for the "kindnet-20211117121608-2067" container
	I1117 12:30:33.484424   16726 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:30:33.584466   16726 cli_runner.go:115] Run: docker volume create kindnet-20211117121608-2067 --label name.minikube.sigs.k8s.io=kindnet-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:30:33.685704   16726 oci.go:102] Successfully created a docker volume kindnet-20211117121608-2067
	I1117 12:30:33.685830   16726 cli_runner.go:115] Run: docker run --rm --name kindnet-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20211117121608-2067 --entrypoint /usr/bin/test -v kindnet-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:30:34.163313   16726 oci.go:106] Successfully prepared a docker volume kindnet-20211117121608-2067
	E1117 12:30:34.163370   16726 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:30:34.163375   16726 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:30:34.163392   16726 client.go:171] LocalClient.Create took 6.715672922s
	I1117 12:30:34.163402   16726 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:30:34.163510   16726 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:30:36.163699   16726 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:30:36.164809   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:30:36.293802   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:30:36.293901   16726 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:36.575461   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:30:36.700102   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:30:36.700185   16726 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:37.250469   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:30:37.375219   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:30:37.375292   16726 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:38.035256   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:30:38.163853   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	W1117 12:30:38.163979   16726 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	
	W1117 12:30:38.164004   16726 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:38.164016   16726 start.go:129] duration metric: createHost completed in 10.765529927s
	I1117 12:30:38.164025   16726 start.go:80] releasing machines lock for "kindnet-20211117121608-2067", held for 10.765661916s
	W1117 12:30:38.164047   16726 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:30:38.164515   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:38.285006   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:38.285050   16726 delete.go:82] Unable to get host status for kindnet-20211117121608-2067, assuming it has already been deleted: state: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	W1117 12:30:38.285179   16726 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:30:38.285190   16726 start.go:547] Will try again in 5 seconds ...
	I1117 12:30:40.630368   16726 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.466859108s)
	I1117 12:30:40.630383   16726 kic.go:188] duration metric: took 6.467023 seconds to extract preloaded images to volume
	I1117 12:30:43.286120   16726 start.go:313] acquiring machines lock for kindnet-20211117121608-2067: {Name:mk8bc8dd834a7b72a089b0a8f85b7f8472a726f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:30:43.286232   16726 start.go:317] acquired machines lock for "kindnet-20211117121608-2067" in 94.392µs
	I1117 12:30:43.286260   16726 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:30:43.286267   16726 fix.go:55] fixHost starting: 
	I1117 12:30:43.286524   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:43.405767   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:43.405821   16726 fix.go:108] recreateIfNeeded on kindnet-20211117121608-2067: state= err=unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:43.405840   16726 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:30:43.434318   16726 out.go:176] * docker "kindnet-20211117121608-2067" container is missing, will recreate.
	I1117 12:30:43.434334   16726 delete.go:124] DEMOLISHING kindnet-20211117121608-2067 ...
	I1117 12:30:43.434462   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:43.555976   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:30:43.556023   16726 stop.go:75] unable to get state: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:43.556037   16726 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:43.556472   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:43.679251   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:43.679337   16726 delete.go:82] Unable to get host status for kindnet-20211117121608-2067, assuming it has already been deleted: state: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:43.679464   16726 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kindnet-20211117121608-2067
	W1117 12:30:43.804747   16726 cli_runner.go:162] docker container inspect -f {{.Id}} kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:30:43.804778   16726 kic.go:360] could not find the container kindnet-20211117121608-2067 to remove it. will try anyways
	I1117 12:30:43.804907   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:43.931350   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:30:43.931409   16726 oci.go:83] error getting container status, will try to delete anyways: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:43.931554   16726 cli_runner.go:115] Run: docker exec --privileged -t kindnet-20211117121608-2067 /bin/bash -c "sudo init 0"
	W1117 12:30:44.059749   16726 cli_runner.go:162] docker exec --privileged -t kindnet-20211117121608-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:30:44.059780   16726 oci.go:656] error shutdown kindnet-20211117121608-2067: docker exec --privileged -t kindnet-20211117121608-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:45.061159   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:45.185110   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:45.185153   16726 oci.go:668] temporary error verifying shutdown: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:45.185161   16726 oci.go:670] temporary error: container kindnet-20211117121608-2067 status is  but expect it to be exited
	I1117 12:30:45.185185   16726 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:45.654340   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:45.789555   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:45.789597   16726 oci.go:668] temporary error verifying shutdown: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:45.789607   16726 oci.go:670] temporary error: container kindnet-20211117121608-2067 status is  but expect it to be exited
	I1117 12:30:45.789630   16726 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:46.684582   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:46.827919   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:46.827989   16726 oci.go:668] temporary error verifying shutdown: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:46.828002   16726 oci.go:670] temporary error: container kindnet-20211117121608-2067 status is  but expect it to be exited
	I1117 12:30:46.828027   16726 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:47.467135   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:47.603201   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:47.603254   16726 oci.go:668] temporary error verifying shutdown: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:47.603264   16726 oci.go:670] temporary error: container kindnet-20211117121608-2067 status is  but expect it to be exited
	I1117 12:30:47.603294   16726 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:48.716945   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:48.822001   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:48.822051   16726 oci.go:668] temporary error verifying shutdown: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:48.822062   16726 oci.go:670] temporary error: container kindnet-20211117121608-2067 status is  but expect it to be exited
	I1117 12:30:48.822084   16726 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:50.333898   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:50.435816   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:50.435853   16726 oci.go:668] temporary error verifying shutdown: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:50.435872   16726 oci.go:670] temporary error: container kindnet-20211117121608-2067 status is  but expect it to be exited
	I1117 12:30:50.435895   16726 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:53.481738   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:53.583620   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:53.583665   16726 oci.go:668] temporary error verifying shutdown: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:53.583675   16726 oci.go:670] temporary error: container kindnet-20211117121608-2067 status is  but expect it to be exited
	I1117 12:30:53.583700   16726 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:59.369598   16726 cli_runner.go:115] Run: docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}
	W1117 12:30:59.471411   16726 cli_runner.go:162] docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:30:59.471450   16726 oci.go:668] temporary error verifying shutdown: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:30:59.471459   16726 oci.go:670] temporary error: container kindnet-20211117121608-2067 status is  but expect it to be exited
	I1117 12:30:59.471484   16726 oci.go:87] couldn't shut down kindnet-20211117121608-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kindnet-20211117121608-2067": docker container inspect kindnet-20211117121608-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	 
	I1117 12:30:59.471564   16726 cli_runner.go:115] Run: docker rm -f -v kindnet-20211117121608-2067
	I1117 12:30:59.572301   16726 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kindnet-20211117121608-2067
	W1117 12:30:59.674903   16726 cli_runner.go:162] docker container inspect -f {{.Id}} kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:30:59.675022   16726 cli_runner.go:115] Run: docker network inspect kindnet-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:30:59.776362   16726 cli_runner.go:162] docker network inspect kindnet-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:30:59.776475   16726 network_create.go:254] running [docker network inspect kindnet-20211117121608-2067] to gather additional debugging logs...
	I1117 12:30:59.776495   16726 cli_runner.go:115] Run: docker network inspect kindnet-20211117121608-2067
	W1117 12:30:59.876414   16726 cli_runner.go:162] docker network inspect kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:30:59.876438   16726 network_create.go:257] error running [docker network inspect kindnet-20211117121608-2067]: docker network inspect kindnet-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20211117121608-2067
	I1117 12:30:59.876461   16726 network_create.go:259] output of [docker network inspect kindnet-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20211117121608-2067
	
	** /stderr **
	W1117 12:30:59.876729   16726 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:30:59.876736   16726 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:31:00.877883   16726 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:31:00.926446   16726 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:31:00.926593   16726 start.go:160] libmachine.API.Create for "kindnet-20211117121608-2067" (driver="docker")
	I1117 12:31:00.926624   16726 client.go:168] LocalClient.Create starting
	I1117 12:31:00.926805   16726 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:31:00.926886   16726 main.go:130] libmachine: Decoding PEM data...
	I1117 12:31:00.926913   16726 main.go:130] libmachine: Parsing certificate...
	I1117 12:31:00.927030   16726 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:31:00.927067   16726 main.go:130] libmachine: Decoding PEM data...
	I1117 12:31:00.927078   16726 main.go:130] libmachine: Parsing certificate...
	I1117 12:31:00.929251   16726 cli_runner.go:115] Run: docker network inspect kindnet-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:31:01.030582   16726 cli_runner.go:162] docker network inspect kindnet-20211117121608-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:31:01.030700   16726 network_create.go:254] running [docker network inspect kindnet-20211117121608-2067] to gather additional debugging logs...
	I1117 12:31:01.030722   16726 cli_runner.go:115] Run: docker network inspect kindnet-20211117121608-2067
	W1117 12:31:01.132171   16726 cli_runner.go:162] docker network inspect kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:31:01.132201   16726 network_create.go:257] error running [docker network inspect kindnet-20211117121608-2067]: docker network inspect kindnet-20211117121608-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20211117121608-2067
	I1117 12:31:01.132223   16726 network_create.go:259] output of [docker network inspect kindnet-20211117121608-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20211117121608-2067
	
	** /stderr **
	I1117 12:31:01.132324   16726 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:31:01.232104   16726 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e838] amended:false}} dirty:map[] misses:0}
	I1117 12:31:01.232140   16726 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:31:01.232315   16726 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e838] amended:true}} dirty:map[192.168.49.0:0xc00000e838 192.168.58.0:0xc00032a010] misses:0}
	I1117 12:31:01.232338   16726 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:31:01.232354   16726 network_create.go:106] attempt to create docker network kindnet-20211117121608-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:31:01.232459   16726 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211117121608-2067
	I1117 12:31:10.879029   16726 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211117121608-2067: (9.649859325s)
	I1117 12:31:10.879054   16726 network_create.go:90] docker network kindnet-20211117121608-2067 192.168.58.0/24 created
	I1117 12:31:10.879067   16726 kic.go:106] calculated static IP "192.168.58.2" for the "kindnet-20211117121608-2067" container
	I1117 12:31:10.880150   16726 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:31:10.981630   16726 cli_runner.go:115] Run: docker volume create kindnet-20211117121608-2067 --label name.minikube.sigs.k8s.io=kindnet-20211117121608-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:31:11.083233   16726 oci.go:102] Successfully created a docker volume kindnet-20211117121608-2067
	I1117 12:31:11.083364   16726 cli_runner.go:115] Run: docker run --rm --name kindnet-20211117121608-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20211117121608-2067 --entrypoint /usr/bin/test -v kindnet-20211117121608-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:31:11.507118   16726 oci.go:106] Successfully prepared a docker volume kindnet-20211117121608-2067
	E1117 12:31:11.507162   16726 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:31:11.507174   16726 client.go:171] LocalClient.Create took 10.584178255s
	I1117 12:31:11.507177   16726 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:31:11.507194   16726 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:31:11.507309   16726 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211117121608-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:31:13.515684   16726 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:31:13.515783   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:31:13.655685   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:31:13.655879   16726 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:31:13.840925   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:31:13.974960   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:31:13.975136   16726 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:31:14.314468   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:31:14.430438   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:31:14.430526   16726 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:31:14.893906   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:31:15.017511   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	W1117 12:31:15.017605   16726 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	
	W1117 12:31:15.017637   16726 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:31:15.017647   16726 start.go:129] duration metric: createHost completed in 14.144173374s
	I1117 12:31:15.019079   16726 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:31:15.019163   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:31:15.134589   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:31:15.134672   16726 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:31:15.339327   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:31:15.469234   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:31:15.469319   16726 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:31:15.772737   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:31:15.896148   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	I1117 12:31:15.896270   16726 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:31:16.564259   16726 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067
	W1117 12:31:16.713850   16726 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067 returned with exit code 1
	W1117 12:31:16.713957   16726 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	
	W1117 12:31:16.713989   16726 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-20211117121608-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211117121608-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-20211117121608-2067
	I1117 12:31:16.714003   16726 fix.go:57] fixHost completed within 33.43911469s
	I1117 12:31:16.714013   16726 start.go:80] releasing machines lock for "kindnet-20211117121608-2067", held for 33.439152678s
	W1117 12:31:16.714149   16726 out.go:241] * Failed to start docker container. Running "minikube delete -p kindnet-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p kindnet-20211117121608-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:31:16.766616   16726 out.go:176] 
	W1117 12:31:16.766771   16726 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:31:16.766787   16726 out.go:241] * 
	* 
	W1117 12:31:16.767429   16726 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:31:16.991415   16726 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (50.50s)
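The kindnet start above exits with status 80 (GUEST_PROVISION) because the kic driver reports "Unable to locate kernel modules" on this Docker Desktop host. A minimal cleanup-and-retry sketch, using only the delete command the log itself suggests and assuming the kindnet profile was started with the same net_test.go:99 flags as the bridge run below (with --cni=kindnet in place of --cni=bridge); whether a retry succeeds depends on the underlying kicbase/Docker Desktop issue, not on these commands:

	# suggested by the failure output above; hypothetical reproduction of the test invocation
	out/minikube-darwin-amd64 delete -p kindnet-20211117121608-2067
	out/minikube-darwin-amd64 start -p kindnet-20211117121608-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker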

TestNetworkPlugins/group/bridge/Start (49.26s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20211117121607-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p bridge-20211117121607-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : exit status 80 (49.250758691s)

-- stdout --
	* [bridge-20211117121607-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node bridge-20211117121607-2067 in cluster bridge-20211117121607-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "bridge-20211117121607-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:30:56.623324   17014 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:30:56.623455   17014 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:30:56.623459   17014 out.go:310] Setting ErrFile to fd 2...
	I1117 12:30:56.623463   17014 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:30:56.623551   17014 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:30:56.623882   17014 out.go:304] Setting JSON to false
	I1117 12:30:56.650330   17014 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3631,"bootTime":1637177425,"procs":341,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:30:56.650431   17014 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:30:56.677224   17014 out.go:176] * [bridge-20211117121607-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:30:56.677416   17014 notify.go:174] Checking for updates...
	I1117 12:30:56.724795   17014 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:30:56.754728   17014 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:30:56.780454   17014 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:30:56.806551   17014 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:30:56.807170   17014 config.go:176] Loaded profile config "kindnet-20211117121608-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:30:56.807328   17014 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:30:56.807396   17014 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:30:56.897455   17014 docker.go:132] docker version: linux-20.10.5
	I1117 12:30:56.897609   17014 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:30:57.054452   17014 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:30:57.008663771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:30:57.103105   17014 out.go:176] * Using the docker driver based on user configuration
	I1117 12:30:57.103193   17014 start.go:280] selected driver: docker
	I1117 12:30:57.103203   17014 start.go:775] validating driver "docker" against <nil>
	I1117 12:30:57.103222   17014 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:30:57.107005   17014 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:30:57.258707   17014 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:30:57.216284976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:30:57.258799   17014 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:30:57.258910   17014 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:30:57.258926   17014 cni.go:93] Creating CNI manager for "bridge"
	I1117 12:30:57.258935   17014 start_flags.go:277] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1117 12:30:57.258945   17014 start_flags.go:282] config:
	{Name:bridge-20211117121607-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:bridge-20211117121607-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netw
orkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:30:57.307973   17014 out.go:176] * Starting control plane node bridge-20211117121607-2067 in cluster bridge-20211117121607-2067
	I1117 12:30:57.308033   17014 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:30:57.333790   17014 out.go:176] * Pulling base image ...
	I1117 12:30:57.333870   17014 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:30:57.333935   17014 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:30:57.333957   17014 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:30:57.333988   17014 cache.go:57] Caching tarball of preloaded images
	I1117 12:30:57.334850   17014 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:30:57.335138   17014 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:30:57.335649   17014 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/bridge-20211117121607-2067/config.json ...
	I1117 12:30:57.336351   17014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/bridge-20211117121607-2067/config.json: {Name:mk1aa862636a81d8c1990464ec498fc2d6390d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:30:57.450500   17014 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:30:57.450520   17014 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:30:57.450533   17014 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:30:57.450579   17014 start.go:313] acquiring machines lock for bridge-20211117121607-2067: {Name:mk92c1a5481a644e37cada8c458b44ba92515575 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:30:57.451350   17014 start.go:317] acquired machines lock for "bridge-20211117121607-2067" in 758.444µs
	I1117 12:30:57.451381   17014 start.go:89] Provisioning new machine with config: &{Name:bridge-20211117121607-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:bridge-20211117121607-2067 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:30:57.451435   17014 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:30:57.505587   17014 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:30:57.505938   17014 start.go:160] libmachine.API.Create for "bridge-20211117121607-2067" (driver="docker")
	I1117 12:30:57.505982   17014 client.go:168] LocalClient.Create starting
	I1117 12:30:57.506180   17014 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:30:57.506281   17014 main.go:130] libmachine: Decoding PEM data...
	I1117 12:30:57.506313   17014 main.go:130] libmachine: Parsing certificate...
	I1117 12:30:57.506417   17014 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:30:57.506485   17014 main.go:130] libmachine: Decoding PEM data...
	I1117 12:30:57.506503   17014 main.go:130] libmachine: Parsing certificate...
	I1117 12:30:57.507521   17014 cli_runner.go:115] Run: docker network inspect bridge-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:30:57.611118   17014 cli_runner.go:162] docker network inspect bridge-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:30:57.611226   17014 network_create.go:254] running [docker network inspect bridge-20211117121607-2067] to gather additional debugging logs...
	I1117 12:30:57.611244   17014 cli_runner.go:115] Run: docker network inspect bridge-20211117121607-2067
	W1117 12:30:57.712068   17014 cli_runner.go:162] docker network inspect bridge-20211117121607-2067 returned with exit code 1
	I1117 12:30:57.712095   17014 network_create.go:257] error running [docker network inspect bridge-20211117121607-2067]: docker network inspect bridge-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20211117121607-2067
	I1117 12:30:57.712109   17014 network_create.go:259] output of [docker network inspect bridge-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20211117121607-2067
	
	** /stderr **
	I1117 12:30:57.712213   17014 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:30:57.816068   17014 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006531a0] misses:0}
	I1117 12:30:57.816106   17014 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:30:57.816122   17014 network_create.go:106] attempt to create docker network bridge-20211117121607-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:30:57.816207   17014 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20211117121607-2067
	I1117 12:31:03.741232   17014 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20211117121607-2067: (5.927806053s)
	I1117 12:31:03.741254   17014 network_create.go:90] docker network bridge-20211117121607-2067 192.168.49.0/24 created
	I1117 12:31:03.741269   17014 kic.go:106] calculated static IP "192.168.49.2" for the "bridge-20211117121607-2067" container
	I1117 12:31:03.741378   17014 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:31:03.840411   17014 cli_runner.go:115] Run: docker volume create bridge-20211117121607-2067 --label name.minikube.sigs.k8s.io=bridge-20211117121607-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:31:03.941757   17014 oci.go:102] Successfully created a docker volume bridge-20211117121607-2067
	I1117 12:31:03.941879   17014 cli_runner.go:115] Run: docker run --rm --name bridge-20211117121607-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20211117121607-2067 --entrypoint /usr/bin/test -v bridge-20211117121607-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:31:04.412096   17014 oci.go:106] Successfully prepared a docker volume bridge-20211117121607-2067
	E1117 12:31:04.412150   17014 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:31:04.412154   17014 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:31:04.412175   17014 client.go:171] LocalClient.Create took 6.909456211s
	I1117 12:31:04.412179   17014 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:31:04.412284   17014 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:31:06.416685   17014 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:31:06.416807   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:06.549755   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:06.549861   17014 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:06.826621   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:06.953779   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:06.953930   17014 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:07.494203   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:07.610105   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:07.610203   17014 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:08.274532   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:08.399614   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	W1117 12:31:08.399711   17014 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	
	W1117 12:31:08.399739   17014 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:08.399770   17014 start.go:129] duration metric: createHost completed in 10.952948241s
	I1117 12:31:08.399800   17014 start.go:80] releasing machines lock for "bridge-20211117121607-2067", held for 10.95307465s
	W1117 12:31:08.399815   17014 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:31:08.400356   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:08.521814   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:08.521861   17014 delete.go:82] Unable to get host status for bridge-20211117121607-2067, assuming it has already been deleted: state: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	W1117 12:31:08.521998   17014 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:31:08.522014   17014 start.go:547] Will try again in 5 seconds ...
	I1117 12:31:10.383748   17014 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.973298693s)
	I1117 12:31:10.383766   17014 kic.go:188] duration metric: took 5.973470 seconds to extract preloaded images to volume
	I1117 12:31:13.523783   17014 start.go:313] acquiring machines lock for bridge-20211117121607-2067: {Name:mk92c1a5481a644e37cada8c458b44ba92515575 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:31:13.523879   17014 start.go:317] acquired machines lock for "bridge-20211117121607-2067" in 74.769µs
	I1117 12:31:13.523907   17014 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:31:13.523917   17014 fix.go:55] fixHost starting: 
	I1117 12:31:13.524363   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:13.664461   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:13.664521   17014 fix.go:108] recreateIfNeeded on bridge-20211117121607-2067: state= err=unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:13.664545   17014 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:31:13.693034   17014 out.go:176] * docker "bridge-20211117121607-2067" container is missing, will recreate.
	I1117 12:31:13.693053   17014 delete.go:124] DEMOLISHING bridge-20211117121607-2067 ...
	I1117 12:31:13.693214   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:13.821407   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:31:13.821487   17014 stop.go:75] unable to get state: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:13.821502   17014 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:13.822026   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:13.951498   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:13.951569   17014 delete.go:82] Unable to get host status for bridge-20211117121607-2067, assuming it has already been deleted: state: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:13.951692   17014 cli_runner.go:115] Run: docker container inspect -f {{.Id}} bridge-20211117121607-2067
	W1117 12:31:14.072528   17014 cli_runner.go:162] docker container inspect -f {{.Id}} bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:14.072559   17014 kic.go:360] could not find the container bridge-20211117121607-2067 to remove it. will try anyways
	I1117 12:31:14.072682   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:14.189238   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:31:14.189292   17014 oci.go:83] error getting container status, will try to delete anyways: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:14.189384   17014 cli_runner.go:115] Run: docker exec --privileged -t bridge-20211117121607-2067 /bin/bash -c "sudo init 0"
	W1117 12:31:14.322858   17014 cli_runner.go:162] docker exec --privileged -t bridge-20211117121607-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:31:14.322889   17014 oci.go:656] error shutdown bridge-20211117121607-2067: docker exec --privileged -t bridge-20211117121607-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:15.323130   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:15.452796   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:15.452840   17014 oci.go:668] temporary error verifying shutdown: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:15.452847   17014 oci.go:670] temporary error: container bridge-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:15.452869   17014 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:15.922736   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:16.054062   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:16.054123   17014 oci.go:668] temporary error verifying shutdown: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:16.054141   17014 oci.go:670] temporary error: container bridge-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:16.054189   17014 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:16.944342   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:17.078330   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:17.078375   17014 oci.go:668] temporary error verifying shutdown: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:17.078386   17014 oci.go:670] temporary error: container bridge-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:17.078410   17014 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:17.724313   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:17.831197   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:17.831236   17014 oci.go:668] temporary error verifying shutdown: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:17.831244   17014 oci.go:670] temporary error: container bridge-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:17.831267   17014 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:18.940173   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:19.043693   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:19.043737   17014 oci.go:668] temporary error verifying shutdown: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:19.043746   17014 oci.go:670] temporary error: container bridge-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:19.043768   17014 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:20.563952   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:20.665995   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:20.666037   17014 oci.go:668] temporary error verifying shutdown: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:20.666048   17014 oci.go:670] temporary error: container bridge-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:20.666070   17014 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:23.713162   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:23.813790   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:23.813830   17014 oci.go:668] temporary error verifying shutdown: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:23.813838   17014 oci.go:670] temporary error: container bridge-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:23.813858   17014 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:29.596373   17014 cli_runner.go:115] Run: docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:29.696176   17014 cli_runner.go:162] docker container inspect bridge-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:29.696216   17014 oci.go:668] temporary error verifying shutdown: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:29.696224   17014 oci.go:670] temporary error: container bridge-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:29.696249   17014 oci.go:87] couldn't shut down bridge-20211117121607-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "bridge-20211117121607-2067": docker container inspect bridge-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	 
	I1117 12:31:29.696324   17014 cli_runner.go:115] Run: docker rm -f -v bridge-20211117121607-2067
	I1117 12:31:29.797128   17014 cli_runner.go:115] Run: docker container inspect -f {{.Id}} bridge-20211117121607-2067
	W1117 12:31:29.896304   17014 cli_runner.go:162] docker container inspect -f {{.Id}} bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:29.896412   17014 cli_runner.go:115] Run: docker network inspect bridge-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:31:29.998230   17014 cli_runner.go:162] docker network inspect bridge-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:31:29.998326   17014 network_create.go:254] running [docker network inspect bridge-20211117121607-2067] to gather additional debugging logs...
	I1117 12:31:29.998345   17014 cli_runner.go:115] Run: docker network inspect bridge-20211117121607-2067
	W1117 12:31:30.099260   17014 cli_runner.go:162] docker network inspect bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:30.099284   17014 network_create.go:257] error running [docker network inspect bridge-20211117121607-2067]: docker network inspect bridge-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20211117121607-2067
	I1117 12:31:30.099296   17014 network_create.go:259] output of [docker network inspect bridge-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20211117121607-2067
	
	** /stderr **
	W1117 12:31:30.099560   17014 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:31:30.099569   17014 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:31:31.099782   17014 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:31:31.126861   17014 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:31:31.127020   17014 start.go:160] libmachine.API.Create for "bridge-20211117121607-2067" (driver="docker")
	I1117 12:31:31.127085   17014 client.go:168] LocalClient.Create starting
	I1117 12:31:31.127253   17014 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:31:31.127331   17014 main.go:130] libmachine: Decoding PEM data...
	I1117 12:31:31.127358   17014 main.go:130] libmachine: Parsing certificate...
	I1117 12:31:31.127451   17014 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:31:31.127523   17014 main.go:130] libmachine: Decoding PEM data...
	I1117 12:31:31.127543   17014 main.go:130] libmachine: Parsing certificate...
	I1117 12:31:31.149076   17014 cli_runner.go:115] Run: docker network inspect bridge-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:31:31.250637   17014 cli_runner.go:162] docker network inspect bridge-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:31:31.250748   17014 network_create.go:254] running [docker network inspect bridge-20211117121607-2067] to gather additional debugging logs...
	I1117 12:31:31.250766   17014 cli_runner.go:115] Run: docker network inspect bridge-20211117121607-2067
	W1117 12:31:31.351364   17014 cli_runner.go:162] docker network inspect bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:31.351389   17014 network_create.go:257] error running [docker network inspect bridge-20211117121607-2067]: docker network inspect bridge-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20211117121607-2067
	I1117 12:31:31.351413   17014 network_create.go:259] output of [docker network inspect bridge-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20211117121607-2067
	
	** /stderr **
	I1117 12:31:31.351503   17014 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:31:31.455875   17014 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006531a0] amended:false}} dirty:map[] misses:0}
	I1117 12:31:31.455913   17014 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:31:31.456079   17014 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006531a0] amended:true}} dirty:map[192.168.49.0:0xc0006531a0 192.168.58.0:0xc000320058] misses:0}
	I1117 12:31:31.456093   17014 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:31:31.456100   17014 network_create.go:106] attempt to create docker network bridge-20211117121607-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:31:31.456176   17014 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20211117121607-2067
	I1117 12:31:39.894697   17014 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20211117121607-2067: (8.438949742s)
	I1117 12:31:39.894727   17014 network_create.go:90] docker network bridge-20211117121607-2067 192.168.58.0/24 created
	I1117 12:31:39.894744   17014 kic.go:106] calculated static IP "192.168.58.2" for the "bridge-20211117121607-2067" container
	I1117 12:31:39.894879   17014 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:31:39.996863   17014 cli_runner.go:115] Run: docker volume create bridge-20211117121607-2067 --label name.minikube.sigs.k8s.io=bridge-20211117121607-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:31:40.097813   17014 oci.go:102] Successfully created a docker volume bridge-20211117121607-2067
	I1117 12:31:40.097948   17014 cli_runner.go:115] Run: docker run --rm --name bridge-20211117121607-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20211117121607-2067 --entrypoint /usr/bin/test -v bridge-20211117121607-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:31:40.508777   17014 oci.go:106] Successfully prepared a docker volume bridge-20211117121607-2067
	E1117 12:31:40.508826   17014 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:31:40.508841   17014 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:31:40.508845   17014 client.go:171] LocalClient.Create took 9.382299879s
	I1117 12:31:40.508857   17014 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:31:40.508969   17014 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:31:42.511707   17014 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:31:42.511800   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:42.662726   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:42.662825   17014 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:42.842142   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:42.953047   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:42.953129   17014 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:43.290628   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:43.391821   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:43.391904   17014 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:43.861681   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:43.981233   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	W1117 12:31:43.981329   17014 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	
	W1117 12:31:43.981351   17014 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:43.981360   17014 start.go:129] duration metric: createHost completed in 12.882257835s
	I1117 12:31:43.981428   17014 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:31:43.981492   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:44.100526   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:44.100613   17014 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:44.297811   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:44.426091   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:44.426177   17014 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:44.724377   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:44.844881   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	I1117 12:31:44.845042   17014 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:45.511660   17014 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067
	W1117 12:31:45.642731   17014 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067 returned with exit code 1
	W1117 12:31:45.642815   17014 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	
	W1117 12:31:45.642841   17014 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-20211117121607-2067
	I1117 12:31:45.642854   17014 fix.go:57] fixHost completed within 32.12200299s
	I1117 12:31:45.642869   17014 start.go:80] releasing machines lock for "bridge-20211117121607-2067", held for 32.122044761s
	W1117 12:31:45.643035   17014 out.go:241] * Failed to start docker container. Running "minikube delete -p bridge-20211117121607-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p bridge-20211117121607-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:31:45.697355   17014 out.go:176] 
	W1117 12:31:45.697515   17014 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:31:45.697533   17014 out.go:241] * 
	* 
	W1117 12:31:45.698504   17014 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:31:45.822332   17014 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (49.26s)
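
For context on the failure above: the bridge profile never gets a running container because the kicbase preparation step fails first ("create kic node: kernel modules: Unable to locate kernel modules"), so every subsequent "get port 22" probe against the Docker CLI answers "No such container" until minikube gives up with exit status 80. The sketch below is a hypothetical, minimal Go reproduction of that port probe and retry loop as it appears in the log; it is not minikube's actual cli_runner/retry implementation, and the container name is simply the profile name from this run.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// sshHostPort asks the Docker CLI which host port is published for 22/tcp on
	// the given container, using the same Go template seen in the log lines above.
	// While the container does not exist, docker exits non-zero and an error is returned.
	func sshHostPort(container string) (string, error) {
		format := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %v: %s", container, err, strings.TrimSpace(string(out)))
		}
		return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
	}

	func main() {
		container := "bridge-20211117121607-2067" // profile/container name taken from this log
		for attempt := 1; attempt <= 5; attempt++ {
			port, err := sshHostPort(container)
			if err == nil {
				fmt.Println("ssh host port:", port)
				return
			}
			// The real log shows retry.go backing off with varying delays; a fixed delay is enough here.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(500 * time.Millisecond)
		}
	}
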

TestNetworkPlugins/group/kubenet/Start (49.08s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20211117121607-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubenet-20211117121607-2067 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : exit status 80 (49.071045558s)

-- stdout --
	* [kubenet-20211117121607-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node kubenet-20211117121607-2067 in cluster kubenet-20211117121607-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "kubenet-20211117121607-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:31:25.977759   17288 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:31:25.977891   17288 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:31:25.977896   17288 out.go:310] Setting ErrFile to fd 2...
	I1117 12:31:25.977900   17288 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:31:25.977998   17288 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:31:25.978337   17288 out.go:304] Setting JSON to false
	I1117 12:31:26.003887   17288 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3660,"bootTime":1637177425,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:31:26.003983   17288 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:31:26.030486   17288 out.go:176] * [kubenet-20211117121607-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:31:26.030715   17288 notify.go:174] Checking for updates...
	I1117 12:31:26.078074   17288 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:31:26.104245   17288 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:31:26.130904   17288 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:31:26.157258   17288 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:31:26.158063   17288 config.go:176] Loaded profile config "bridge-20211117121607-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:31:26.158234   17288 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:31:26.158296   17288 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:31:26.249811   17288 docker.go:132] docker version: linux-20.10.5
	I1117 12:31:26.249947   17288 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:31:26.402945   17288 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:31:26.356109886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:31:26.450845   17288 out.go:176] * Using the docker driver based on user configuration
	I1117 12:31:26.450928   17288 start.go:280] selected driver: docker
	I1117 12:31:26.450941   17288 start.go:775] validating driver "docker" against <nil>
	I1117 12:31:26.450966   17288 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:31:26.454367   17288 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:31:26.619185   17288 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:31:26.559448514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:31:26.619335   17288 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:31:26.619535   17288 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:31:26.619562   17288 cni.go:89] network plugin configured as "kubenet", returning disabled
	I1117 12:31:26.619576   17288 start_flags.go:282] config:
	{Name:kubenet-20211117121607-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:kubenet-20211117121607-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:31:26.666436   17288 out.go:176] * Starting control plane node kubenet-20211117121607-2067 in cluster kubenet-20211117121607-2067
	I1117 12:31:26.666502   17288 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:31:26.692613   17288 out.go:176] * Pulling base image ...
	I1117 12:31:26.692706   17288 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:31:26.692720   17288 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:31:26.692799   17288 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:31:26.692830   17288 cache.go:57] Caching tarball of preloaded images
	I1117 12:31:26.693047   17288 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:31:26.693065   17288 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:31:26.694198   17288 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/kubenet-20211117121607-2067/config.json ...
	I1117 12:31:26.694384   17288 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/kubenet-20211117121607-2067/config.json: {Name:mk95278a8f5991c2de8d70e635d062ca8ea33d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:31:26.806719   17288 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:31:26.806738   17288 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:31:26.806755   17288 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:31:26.806802   17288 start.go:313] acquiring machines lock for kubenet-20211117121607-2067: {Name:mkf4860d738cc91c31dc6e6c0eed2739cb4e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:31:26.807900   17288 start.go:317] acquired machines lock for "kubenet-20211117121607-2067" in 1.080934ms
	I1117 12:31:26.807930   17288 start.go:89] Provisioning new machine with config: &{Name:kubenet-20211117121607-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:kubenet-20211117121607-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:31:26.808011   17288 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:31:26.856421   17288 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:31:26.856811   17288 start.go:160] libmachine.API.Create for "kubenet-20211117121607-2067" (driver="docker")
	I1117 12:31:26.856857   17288 client.go:168] LocalClient.Create starting
	I1117 12:31:26.857028   17288 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:31:26.857114   17288 main.go:130] libmachine: Decoding PEM data...
	I1117 12:31:26.857148   17288 main.go:130] libmachine: Parsing certificate...
	I1117 12:31:26.857275   17288 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:31:26.857333   17288 main.go:130] libmachine: Decoding PEM data...
	I1117 12:31:26.857350   17288 main.go:130] libmachine: Parsing certificate...
	I1117 12:31:26.858355   17288 cli_runner.go:115] Run: docker network inspect kubenet-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:31:26.957793   17288 cli_runner.go:162] docker network inspect kubenet-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:31:26.957902   17288 network_create.go:254] running [docker network inspect kubenet-20211117121607-2067] to gather additional debugging logs...
	I1117 12:31:26.957946   17288 cli_runner.go:115] Run: docker network inspect kubenet-20211117121607-2067
	W1117 12:31:27.058089   17288 cli_runner.go:162] docker network inspect kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:31:27.058111   17288 network_create.go:257] error running [docker network inspect kubenet-20211117121607-2067]: docker network inspect kubenet-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20211117121607-2067
	I1117 12:31:27.058125   17288 network_create.go:259] output of [docker network inspect kubenet-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20211117121607-2067
	
	** /stderr **
	I1117 12:31:27.058216   17288 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:31:27.158629   17288 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000112150] misses:0}
	I1117 12:31:27.158666   17288 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:31:27.158683   17288 network_create.go:106] attempt to create docker network kubenet-20211117121607-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:31:27.158766   17288 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20211117121607-2067
	I1117 12:31:32.705331   17288 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20211117121607-2067: (5.546953665s)
	I1117 12:31:32.705354   17288 network_create.go:90] docker network kubenet-20211117121607-2067 192.168.49.0/24 created
	I1117 12:31:32.705369   17288 kic.go:106] calculated static IP "192.168.49.2" for the "kubenet-20211117121607-2067" container
	I1117 12:31:32.705503   17288 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:31:32.805346   17288 cli_runner.go:115] Run: docker volume create kubenet-20211117121607-2067 --label name.minikube.sigs.k8s.io=kubenet-20211117121607-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:31:32.905326   17288 oci.go:102] Successfully created a docker volume kubenet-20211117121607-2067
	I1117 12:31:32.905453   17288 cli_runner.go:115] Run: docker run --rm --name kubenet-20211117121607-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20211117121607-2067 --entrypoint /usr/bin/test -v kubenet-20211117121607-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:31:33.396948   17288 oci.go:106] Successfully prepared a docker volume kubenet-20211117121607-2067
	E1117 12:31:33.397002   17288 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:31:33.397017   17288 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:31:33.397025   17288 client.go:171] LocalClient.Create took 6.540680821s
	I1117 12:31:33.397036   17288 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:31:33.397146   17288 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:31:35.397184   17288 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:31:35.397301   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:31:35.523131   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:31:35.523214   17288 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:35.799627   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:31:35.931062   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:31:35.931191   17288 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:36.471776   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:31:36.616023   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:31:36.616116   17288 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:37.273038   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:31:37.403253   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	W1117 12:31:37.403344   17288 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	
	W1117 12:31:37.403363   17288 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:37.403373   17288 start.go:129] duration metric: createHost completed in 10.596119535s
	I1117 12:31:37.403379   17288 start.go:80] releasing machines lock for "kubenet-20211117121607-2067", held for 10.596234699s
	W1117 12:31:37.403396   17288 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:31:37.403879   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:37.532025   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:37.532090   17288 delete.go:82] Unable to get host status for kubenet-20211117121607-2067, assuming it has already been deleted: state: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	W1117 12:31:37.532228   17288 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:31:37.532239   17288 start.go:547] Will try again in 5 seconds ...
	I1117 12:31:39.566882   17288 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.170057222s)
	I1117 12:31:39.566923   17288 kic.go:188] duration metric: took 6.170233 seconds to extract preloaded images to volume
	I1117 12:31:42.536704   17288 start.go:313] acquiring machines lock for kubenet-20211117121607-2067: {Name:mkf4860d738cc91c31dc6e6c0eed2739cb4e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:31:42.536849   17288 start.go:317] acquired machines lock for "kubenet-20211117121607-2067" in 120.24µs
	I1117 12:31:42.536889   17288 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:31:42.536902   17288 fix.go:55] fixHost starting: 
	I1117 12:31:42.537374   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:42.681680   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:42.681719   17288 fix.go:108] recreateIfNeeded on kubenet-20211117121607-2067: state= err=unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:42.681736   17288 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:31:42.708404   17288 out.go:176] * docker "kubenet-20211117121607-2067" container is missing, will recreate.
	I1117 12:31:42.708422   17288 delete.go:124] DEMOLISHING kubenet-20211117121607-2067 ...
	I1117 12:31:42.708632   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:42.812102   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:31:42.812151   17288 stop.go:75] unable to get state: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:42.812173   17288 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:42.812565   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:42.924619   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:42.924676   17288 delete.go:82] Unable to get host status for kubenet-20211117121607-2067, assuming it has already been deleted: state: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:42.924761   17288 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubenet-20211117121607-2067
	W1117 12:31:43.026915   17288 cli_runner.go:162] docker container inspect -f {{.Id}} kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:31:43.026942   17288 kic.go:360] could not find the container kubenet-20211117121607-2067 to remove it. will try anyways
	I1117 12:31:43.027014   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:43.128650   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:31:43.128695   17288 oci.go:83] error getting container status, will try to delete anyways: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:43.128797   17288 cli_runner.go:115] Run: docker exec --privileged -t kubenet-20211117121607-2067 /bin/bash -c "sudo init 0"
	W1117 12:31:43.233014   17288 cli_runner.go:162] docker exec --privileged -t kubenet-20211117121607-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:31:43.233037   17288 oci.go:656] error shutdown kubenet-20211117121607-2067: docker exec --privileged -t kubenet-20211117121607-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:44.238030   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:44.370228   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:44.370315   17288 oci.go:668] temporary error verifying shutdown: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:44.370326   17288 oci.go:670] temporary error: container kubenet-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:44.370351   17288 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:44.836663   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:44.959084   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:44.959125   17288 oci.go:668] temporary error verifying shutdown: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:44.959141   17288 oci.go:670] temporary error: container kubenet-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:44.959166   17288 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:45.849555   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:46.008319   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:46.008374   17288 oci.go:668] temporary error verifying shutdown: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:46.008387   17288 oci.go:670] temporary error: container kubenet-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:46.008417   17288 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:46.648607   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:46.785080   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:46.785121   17288 oci.go:668] temporary error verifying shutdown: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:46.785129   17288 oci.go:670] temporary error: container kubenet-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:46.785153   17288 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:47.896454   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:47.998807   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:47.998852   17288 oci.go:668] temporary error verifying shutdown: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:47.998877   17288 oci.go:670] temporary error: container kubenet-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:47.998907   17288 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:49.519959   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:49.621706   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:49.621760   17288 oci.go:668] temporary error verifying shutdown: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:49.621768   17288 oci.go:670] temporary error: container kubenet-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:49.621791   17288 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:52.669883   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:52.770808   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:52.770849   17288 oci.go:668] temporary error verifying shutdown: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:52.770859   17288 oci.go:670] temporary error: container kubenet-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:52.770881   17288 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:58.562333   17288 cli_runner.go:115] Run: docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}
	W1117 12:31:58.665752   17288 cli_runner.go:162] docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:31:58.665800   17288 oci.go:668] temporary error verifying shutdown: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:31:58.665809   17288 oci.go:670] temporary error: container kubenet-20211117121607-2067 status is  but expect it to be exited
	I1117 12:31:58.665835   17288 oci.go:87] couldn't shut down kubenet-20211117121607-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubenet-20211117121607-2067": docker container inspect kubenet-20211117121607-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	 
	I1117 12:31:58.665922   17288 cli_runner.go:115] Run: docker rm -f -v kubenet-20211117121607-2067
	I1117 12:31:58.768564   17288 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubenet-20211117121607-2067
	W1117 12:31:58.871098   17288 cli_runner.go:162] docker container inspect -f {{.Id}} kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:31:58.871214   17288 cli_runner.go:115] Run: docker network inspect kubenet-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:31:58.972712   17288 cli_runner.go:162] docker network inspect kubenet-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:31:58.972816   17288 network_create.go:254] running [docker network inspect kubenet-20211117121607-2067] to gather additional debugging logs...
	I1117 12:31:58.972829   17288 cli_runner.go:115] Run: docker network inspect kubenet-20211117121607-2067
	W1117 12:31:59.072562   17288 cli_runner.go:162] docker network inspect kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:31:59.072589   17288 network_create.go:257] error running [docker network inspect kubenet-20211117121607-2067]: docker network inspect kubenet-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20211117121607-2067
	I1117 12:31:59.072601   17288 network_create.go:259] output of [docker network inspect kubenet-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20211117121607-2067
	
	** /stderr **
	W1117 12:31:59.073482   17288 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:31:59.073489   17288 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:32:00.073689   17288 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:32:00.100868   17288 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 12:32:00.100973   17288 start.go:160] libmachine.API.Create for "kubenet-20211117121607-2067" (driver="docker")
	I1117 12:32:00.101006   17288 client.go:168] LocalClient.Create starting
	I1117 12:32:00.101161   17288 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:32:00.101220   17288 main.go:130] libmachine: Decoding PEM data...
	I1117 12:32:00.101237   17288 main.go:130] libmachine: Parsing certificate...
	I1117 12:32:00.101306   17288 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:32:00.101342   17288 main.go:130] libmachine: Decoding PEM data...
	I1117 12:32:00.101353   17288 main.go:130] libmachine: Parsing certificate...
	I1117 12:32:00.101787   17288 cli_runner.go:115] Run: docker network inspect kubenet-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:32:00.202549   17288 cli_runner.go:162] docker network inspect kubenet-20211117121607-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:32:00.202656   17288 network_create.go:254] running [docker network inspect kubenet-20211117121607-2067] to gather additional debugging logs...
	I1117 12:32:00.202673   17288 cli_runner.go:115] Run: docker network inspect kubenet-20211117121607-2067
	W1117 12:32:00.303150   17288 cli_runner.go:162] docker network inspect kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:32:00.303174   17288 network_create.go:257] error running [docker network inspect kubenet-20211117121607-2067]: docker network inspect kubenet-20211117121607-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20211117121607-2067
	I1117 12:32:00.303197   17288 network_create.go:259] output of [docker network inspect kubenet-20211117121607-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20211117121607-2067
	
	** /stderr **
	I1117 12:32:00.303304   17288 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:32:00.405897   17288 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000112150] amended:false}} dirty:map[] misses:0}
	I1117 12:32:00.405927   17288 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:32:00.406098   17288 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000112150] amended:true}} dirty:map[192.168.49.0:0xc000112150 192.168.58.0:0xc00064c230] misses:0}
	I1117 12:32:00.406115   17288 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:32:00.406128   17288 network_create.go:106] attempt to create docker network kubenet-20211117121607-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:32:00.406208   17288 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20211117121607-2067
	I1117 12:32:09.064831   17288 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20211117121607-2067: (8.658702886s)
	I1117 12:32:09.064862   17288 network_create.go:90] docker network kubenet-20211117121607-2067 192.168.58.0/24 created
	I1117 12:32:09.064875   17288 kic.go:106] calculated static IP "192.168.58.2" for the "kubenet-20211117121607-2067" container
	I1117 12:32:09.066456   17288 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:32:09.169632   17288 cli_runner.go:115] Run: docker volume create kubenet-20211117121607-2067 --label name.minikube.sigs.k8s.io=kubenet-20211117121607-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:32:09.301434   17288 oci.go:102] Successfully created a docker volume kubenet-20211117121607-2067
	I1117 12:32:09.301555   17288 cli_runner.go:115] Run: docker run --rm --name kubenet-20211117121607-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20211117121607-2067 --entrypoint /usr/bin/test -v kubenet-20211117121607-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:32:09.695966   17288 oci.go:106] Successfully prepared a docker volume kubenet-20211117121607-2067
	E1117 12:32:09.696020   17288 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:32:09.696030   17288 client.go:171] LocalClient.Create took 9.595178109s
	I1117 12:32:09.696038   17288 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:32:09.696067   17288 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:32:09.696183   17288 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20211117121607-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:32:11.696482   17288 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:32:11.696638   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:32:11.831413   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:32:11.831571   17288 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:32:12.019528   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:32:12.154440   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:32:12.154528   17288 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:32:12.485994   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:32:12.616445   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:32:12.616556   17288 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:32:13.085981   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:32:13.210105   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	W1117 12:32:13.210204   17288 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	
	W1117 12:32:13.210233   17288 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:32:13.210246   17288 start.go:129] duration metric: createHost completed in 13.1367506s
	I1117 12:32:13.210314   17288 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:32:13.210375   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:32:13.320863   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:32:13.320950   17288 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:32:13.519529   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:32:13.642065   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:32:13.642160   17288 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:32:13.945709   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:32:14.070860   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	I1117 12:32:14.070947   17288 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:32:14.737133   17288 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067
	W1117 12:32:14.859810   17288 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067 returned with exit code 1
	W1117 12:32:14.859895   17288 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	
	W1117 12:32:14.859917   17288 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-20211117121607-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20211117121607-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-20211117121607-2067
	I1117 12:32:14.859934   17288 fix.go:57] fixHost completed within 32.32375368s
	I1117 12:32:14.859943   17288 start.go:80] releasing machines lock for "kubenet-20211117121607-2067", held for 32.323806054s
	W1117 12:32:14.860085   17288 out.go:241] * Failed to start docker container. Running "minikube delete -p kubenet-20211117121607-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p kubenet-20211117121607-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:32:14.907467   17288 out.go:176] 
	W1117 12:32:14.907572   17288 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:32:14.907583   17288 out.go:241] * 
	* 
	W1117 12:32:14.908212   17288 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:32:14.985520   17288 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (49.08s)
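For reference, the step that actually fails in the run above is the kernel-modules lookup minikube performs right after preparing the preload volume (E1117 12:32:09.696020 oci.go:173] error getting kernel modules path: Unable to locate kernel modules). A minimal sketch for replaying the recorded docker calls by hand, assuming Docker Desktop is running and the kicbase digest from this run is still cached; the final ls probe is illustrative only and is not the exact check minikube performs:

    # Recreate the node volume exactly as logged above.
    docker volume create kubenet-20211117121607-2067 \
      --label name.minikube.sigs.k8s.io=kubenet-20211117121607-2067 \
      --label created_by.minikube.sigs.k8s.io=true

    # The preload sidecar's check from the log (name/labels omitted here);
    # it only asserts that /var/lib exists inside the volume.
    docker run --rm --entrypoint /usr/bin/test \
      -v kubenet-20211117121607-2067:/var \
      gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c \
      -d /var/lib

    # Hypothetical probe (not from the log): see whether the Docker Desktop VM
    # exposes /lib/modules to containers at all.
    docker run --rm --entrypoint ls \
      -v /lib/modules:/lib/modules:ro \
      gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c \
      /lib/modules

If the probe lists nothing or errors, that is consistent with the GUEST_PROVISION failure above; the remediation the log itself suggests is "minikube delete -p kubenet-20211117121607-2067" followed by a fresh start.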

TestStartStop/group/old-k8s-version/serial/FirstStart (46.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20211117123155-2067 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20211117123155-2067 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0: exit status 80 (46.023442666s)

-- stdout --
	* [old-k8s-version-20211117123155-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node old-k8s-version-20211117123155-2067 in cluster old-k8s-version-20211117123155-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20211117123155-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:31:55.677155   17571 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:31:55.677357   17571 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:31:55.677364   17571 out.go:310] Setting ErrFile to fd 2...
	I1117 12:31:55.677368   17571 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:31:55.677486   17571 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:31:55.677915   17571 out.go:304] Setting JSON to false
	I1117 12:31:55.706728   17571 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3690,"bootTime":1637177425,"procs":331,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:31:55.706833   17571 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:31:55.734960   17571 out.go:176] * [old-k8s-version-20211117123155-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:31:55.735150   17571 notify.go:174] Checking for updates...
	I1117 12:31:55.783192   17571 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:31:55.814034   17571 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:31:55.840049   17571 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:31:55.865872   17571 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:31:55.866271   17571 config.go:176] Loaded profile config "kubenet-20211117121607-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:31:55.866351   17571 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:31:55.866395   17571 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:31:55.976390   17571 docker.go:132] docker version: linux-20.10.5
	I1117 12:31:55.976530   17571 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:31:56.129922   17571 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:31:56.081098075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:31:56.177168   17571 out.go:176] * Using the docker driver based on user configuration
	I1117 12:31:56.177223   17571 start.go:280] selected driver: docker
	I1117 12:31:56.177236   17571 start.go:775] validating driver "docker" against <nil>
	I1117 12:31:56.177257   17571 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:31:56.180913   17571 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:31:56.333051   17571 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:31:56.285140784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:31:56.333152   17571 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:31:56.333268   17571 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:31:56.333286   17571 cni.go:93] Creating CNI manager for ""
	I1117 12:31:56.333293   17571 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:31:56.333304   17571 start_flags.go:282] config:
	{Name:old-k8s-version-20211117123155-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20211117123155-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:31:56.382199   17571 out.go:176] * Starting control plane node old-k8s-version-20211117123155-2067 in cluster old-k8s-version-20211117123155-2067
	I1117 12:31:56.382285   17571 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:31:56.408275   17571 out.go:176] * Pulling base image ...
	I1117 12:31:56.408402   17571 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 12:31:56.408501   17571 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4
	I1117 12:31:56.408512   17571 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:31:56.408532   17571 cache.go:57] Caching tarball of preloaded images
	I1117 12:31:56.408846   17571 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:31:56.408879   17571 cache.go:60] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I1117 12:31:56.409623   17571 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/old-k8s-version-20211117123155-2067/config.json ...
	I1117 12:31:56.409841   17571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/old-k8s-version-20211117123155-2067/config.json: {Name:mk0a3a8a5e804e3e8d23cbd49762ec7c3102bb10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:31:56.527282   17571 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:31:56.527301   17571 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:31:56.527317   17571 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:31:56.527359   17571 start.go:313] acquiring machines lock for old-k8s-version-20211117123155-2067: {Name:mkdcdd296c413d69b1c0c600c9bbeca63dadcf75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:31:56.528432   17571 start.go:317] acquired machines lock for "old-k8s-version-20211117123155-2067" in 1.058513ms
	I1117 12:31:56.528462   17571 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20211117123155-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20211117123155-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I1117 12:31:56.528528   17571 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:31:56.555276   17571 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:31:56.555463   17571 start.go:160] libmachine.API.Create for "old-k8s-version-20211117123155-2067" (driver="docker")
	I1117 12:31:56.555488   17571 client.go:168] LocalClient.Create starting
	I1117 12:31:56.555594   17571 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:31:56.577013   17571 main.go:130] libmachine: Decoding PEM data...
	I1117 12:31:56.577062   17571 main.go:130] libmachine: Parsing certificate...
	I1117 12:31:56.577200   17571 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:31:56.577284   17571 main.go:130] libmachine: Decoding PEM data...
	I1117 12:31:56.577306   17571 main.go:130] libmachine: Parsing certificate...
	I1117 12:31:56.578289   17571 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:31:56.680333   17571 cli_runner.go:162] docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:31:56.680455   17571 network_create.go:254] running [docker network inspect old-k8s-version-20211117123155-2067] to gather additional debugging logs...
	I1117 12:31:56.680477   17571 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067
	W1117 12:31:56.781384   17571 cli_runner.go:162] docker network inspect old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:31:56.781417   17571 network_create.go:257] error running [docker network inspect old-k8s-version-20211117123155-2067]: docker network inspect old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20211117123155-2067
	I1117 12:31:56.781432   17571 network_create.go:259] output of [docker network inspect old-k8s-version-20211117123155-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20211117123155-2067
	
	** /stderr **
	I1117 12:31:56.781517   17571 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:31:56.883031   17571 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000162170] misses:0}
	I1117 12:31:56.883066   17571 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:31:56.883086   17571 network_create.go:106] attempt to create docker network old-k8s-version-20211117123155-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:31:56.883164   17571 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117123155-2067
	I1117 12:32:02.266813   17571 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117123155-2067: (5.383704748s)
	I1117 12:32:02.266836   17571 network_create.go:90] docker network old-k8s-version-20211117123155-2067 192.168.49.0/24 created
	I1117 12:32:02.266855   17571 kic.go:106] calculated static IP "192.168.49.2" for the "old-k8s-version-20211117123155-2067" container
	I1117 12:32:02.266974   17571 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:32:02.365351   17571 cli_runner.go:115] Run: docker volume create old-k8s-version-20211117123155-2067 --label name.minikube.sigs.k8s.io=old-k8s-version-20211117123155-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:32:02.467541   17571 oci.go:102] Successfully created a docker volume old-k8s-version-20211117123155-2067
	I1117 12:32:02.467663   17571 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20211117123155-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20211117123155-2067 --entrypoint /usr/bin/test -v old-k8s-version-20211117123155-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:32:02.957136   17571 oci.go:106] Successfully prepared a docker volume old-k8s-version-20211117123155-2067
	E1117 12:32:02.957196   17571 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:32:02.957200   17571 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 12:32:02.957219   17571 client.go:171] LocalClient.Create took 6.401849494s
	I1117 12:32:02.957227   17571 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:32:02.957342   17571 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117123155-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:32:04.961329   17571 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:32:04.961492   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:05.112336   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:05.112426   17571 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:05.389194   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:05.515335   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:05.515412   17571 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:06.061104   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:06.189887   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:06.189963   17571 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:06.850539   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:06.969561   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	W1117 12:32:06.969646   17571 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	W1117 12:32:06.969668   17571 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:06.969679   17571 start.go:129] duration metric: createHost completed in 10.441337701s
	I1117 12:32:06.969685   17571 start.go:80] releasing machines lock for "old-k8s-version-20211117123155-2067", held for 10.441438224s
	W1117 12:32:06.969700   17571 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:32:06.970168   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:07.094377   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:07.094446   17571 delete.go:82] Unable to get host status for old-k8s-version-20211117123155-2067, assuming it has already been deleted: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	W1117 12:32:07.094645   17571 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:32:07.094662   17571 start.go:547] Will try again in 5 seconds ...
	I1117 12:32:08.498105   17571 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117123155-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.540795467s)
	I1117 12:32:08.498126   17571 kic.go:188] duration metric: took 5.540989 seconds to extract preloaded images to volume
	I1117 12:32:12.095046   17571 start.go:313] acquiring machines lock for old-k8s-version-20211117123155-2067: {Name:mkdcdd296c413d69b1c0c600c9bbeca63dadcf75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:12.095132   17571 start.go:317] acquired machines lock for "old-k8s-version-20211117123155-2067" in 65.525µs
	I1117 12:32:12.095154   17571 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:32:12.095162   17571 fix.go:55] fixHost starting: 
	I1117 12:32:12.095423   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:12.218458   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:12.218500   17571 fix.go:108] recreateIfNeeded on old-k8s-version-20211117123155-2067: state= err=unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:12.218519   17571 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:32:12.245441   17571 out.go:176] * docker "old-k8s-version-20211117123155-2067" container is missing, will recreate.
	I1117 12:32:12.245457   17571 delete.go:124] DEMOLISHING old-k8s-version-20211117123155-2067 ...
	I1117 12:32:12.245574   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:12.364653   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:32:12.364695   17571 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:12.364708   17571 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:12.365119   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:12.484272   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:12.484313   17571 delete.go:82] Unable to get host status for old-k8s-version-20211117123155-2067, assuming it has already been deleted: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:12.484410   17571 cli_runner.go:115] Run: docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067
	W1117 12:32:12.616320   17571 cli_runner.go:162] docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:12.616352   17571 kic.go:360] could not find the container old-k8s-version-20211117123155-2067 to remove it. will try anyways
	I1117 12:32:12.616456   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:12.732824   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:32:12.732873   17571 oci.go:83] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:12.732996   17571 cli_runner.go:115] Run: docker exec --privileged -t old-k8s-version-20211117123155-2067 /bin/bash -c "sudo init 0"
	W1117 12:32:12.854793   17571 cli_runner.go:162] docker exec --privileged -t old-k8s-version-20211117123155-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:32:12.854818   17571 oci.go:656] error shutdown old-k8s-version-20211117123155-2067: docker exec --privileged -t old-k8s-version-20211117123155-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:13.861013   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:13.991484   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:13.991548   17571 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:13.991588   17571 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:32:13.991619   17571 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:14.461428   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:14.588314   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:14.588364   17571 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:14.588376   17571 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:32:14.588409   17571 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:15.485945   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:15.588869   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:15.588908   17571 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:15.588916   17571 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:32:15.588937   17571 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:16.225418   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:16.335390   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:16.335429   17571 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:16.335438   17571 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:32:16.335461   17571 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:17.448962   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:17.550292   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:17.550331   17571 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:17.550340   17571 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:32:17.550361   17571 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:19.069507   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:19.170031   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:19.170068   17571 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:19.170076   17571 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:32:19.170096   17571 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:22.219374   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:22.318100   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:22.318142   17571 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:22.318167   17571 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:32:22.318193   17571 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:28.102743   17571 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:28.201621   17571 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:28.201661   17571 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:28.201671   17571 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:32:28.201704   17571 oci.go:87] couldn't shut down old-k8s-version-20211117123155-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	 
	I1117 12:32:28.201794   17571 cli_runner.go:115] Run: docker rm -f -v old-k8s-version-20211117123155-2067
	I1117 12:32:28.300678   17571 cli_runner.go:115] Run: docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067
	W1117 12:32:28.400710   17571 cli_runner.go:162] docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067 returned with exit code 1
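Editor's note: the sequence above polls `docker container inspect --format={{.State.Status}}` with a growing pause between attempts (about 3s, then 5.8s), and when the container never reports "exited" it shrugs ("might be okay") and falls back to `docker rm -f -v`. A minimal, hypothetical Go sketch of that retry shape follows; the helper names are made up and this is not minikube's actual retry.go/oci.go API.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // containerStatus shells out to `docker container inspect` and returns the
    // container's State.Status (e.g. "running", "exited"), or an error if the
    // container does not exist.
    func containerStatus(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").CombinedOutput()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %v: %s", name, err, out)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    // waitExited retries with a growing delay until the container reports
    // "exited" or the attempts run out, mirroring the "will retry after ..."
    // lines in the log above.
    func waitExited(name string, attempts int) error {
    	delay := 3 * time.Second
    	for i := 0; i < attempts; i++ {
    		status, err := containerStatus(name)
    		if err == nil && status == "exited" {
    			return nil
    		}
    		time.Sleep(delay)
    		delay += delay / 2 // rough backoff; the real code also adds jitter
    	}
    	return fmt.Errorf("container %s never reached exited state", name)
    }

    func main() {
    	name := "old-k8s-version-20211117123155-2067"
    	if err := waitExited(name, 3); err != nil {
    		// Give up and force-remove, as the log does.
    		_ = exec.Command("docker", "rm", "-f", "-v", name).Run()
    	}
    }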
	I1117 12:32:28.400821   17571 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:32:28.499438   17571 cli_runner.go:162] docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:32:28.499528   17571 network_create.go:254] running [docker network inspect old-k8s-version-20211117123155-2067] to gather additional debugging logs...
	I1117 12:32:28.499543   17571 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067
	W1117 12:32:28.597554   17571 cli_runner.go:162] docker network inspect old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:28.597582   17571 network_create.go:257] error running [docker network inspect old-k8s-version-20211117123155-2067]: docker network inspect old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20211117123155-2067
	I1117 12:32:28.597594   17571 network_create.go:259] output of [docker network inspect old-k8s-version-20211117123155-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20211117123155-2067
	
	** /stderr **
	W1117 12:32:28.597871   17571 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:32:28.597877   17571 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:32:29.598718   17571 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:32:29.646094   17571 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:32:29.646187   17571 start.go:160] libmachine.API.Create for "old-k8s-version-20211117123155-2067" (driver="docker")
	I1117 12:32:29.646205   17571 client.go:168] LocalClient.Create starting
	I1117 12:32:29.646295   17571 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:32:29.646333   17571 main.go:130] libmachine: Decoding PEM data...
	I1117 12:32:29.646347   17571 main.go:130] libmachine: Parsing certificate...
	I1117 12:32:29.646390   17571 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:32:29.646416   17571 main.go:130] libmachine: Decoding PEM data...
	I1117 12:32:29.646430   17571 main.go:130] libmachine: Parsing certificate...
	I1117 12:32:29.646767   17571 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:32:29.747776   17571 cli_runner.go:162] docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:32:29.747868   17571 network_create.go:254] running [docker network inspect old-k8s-version-20211117123155-2067] to gather additional debugging logs...
	I1117 12:32:29.747884   17571 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067
	W1117 12:32:29.848586   17571 cli_runner.go:162] docker network inspect old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:29.848633   17571 network_create.go:257] error running [docker network inspect old-k8s-version-20211117123155-2067]: docker network inspect old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20211117123155-2067
	I1117 12:32:29.848654   17571 network_create.go:259] output of [docker network inspect old-k8s-version-20211117123155-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20211117123155-2067
	
	** /stderr **
	I1117 12:32:29.848757   17571 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:32:29.947744   17571 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000162170] amended:false}} dirty:map[] misses:0}
	I1117 12:32:29.947776   17571 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:32:29.947946   17571 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000162170] amended:true}} dirty:map[192.168.49.0:0xc000162170 192.168.58.0:0xc0007521a8] misses:0}
	I1117 12:32:29.947964   17571 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:32:29.947973   17571 network_create.go:106] attempt to create docker network old-k8s-version-20211117123155-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:32:29.948053   17571 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117123155-2067
	I1117 12:32:35.765352   17571 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117123155-2067: (5.817305419s)
	I1117 12:32:35.765383   17571 network_create.go:90] docker network old-k8s-version-20211117123155-2067 192.168.58.0/24 created
	I1117 12:32:35.765400   17571 kic.go:106] calculated static IP "192.168.58.2" for the "old-k8s-version-20211117123155-2067" container
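Editor's note: in the lines above, network.go skips 192.168.49.0/24 because another profile still holds an unexpired reservation on it, settles on 192.168.58.0/24, and derives the node's static IP as the .2 address. A rough Go sketch of that "first free /24 wins" idea is below; the reservation store and candidate list are deliberate simplifications, not minikube's real implementation.

    package main

    import "fmt"

    // pickSubnet returns the first candidate /24 that is not currently reserved.
    // minikube keeps reservations in a concurrent map with a 1m expiry; a plain
    // map stands in for that here.
    func pickSubnet(candidates []string, reserved map[string]bool) (string, error) {
    	for _, cidr := range candidates {
    		if reserved[cidr] {
    			continue // e.g. 192.168.49.0/24 held by another profile
    		}
    		return cidr, nil
    	}
    	return "", fmt.Errorf("no free private subnet among %d candidates", len(candidates))
    }

    func main() {
    	reserved := map[string]bool{"192.168.49.0/24": true}
    	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
    	subnet, err := pickSubnet(candidates, reserved)
    	if err != nil {
    		panic(err)
    	}
    	// The gateway is the .1 address; the single node gets .2
    	// (192.168.58.2 in the log above).
    	fmt.Println("using", subnet)
    }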
	I1117 12:32:35.765512   17571 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:32:35.864323   17571 cli_runner.go:115] Run: docker volume create old-k8s-version-20211117123155-2067 --label name.minikube.sigs.k8s.io=old-k8s-version-20211117123155-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:32:35.963983   17571 oci.go:102] Successfully created a docker volume old-k8s-version-20211117123155-2067
	I1117 12:32:35.964135   17571 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20211117123155-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20211117123155-2067 --entrypoint /usr/bin/test -v old-k8s-version-20211117123155-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:32:36.364862   17571 oci.go:106] Successfully prepared a docker volume old-k8s-version-20211117123155-2067
	E1117 12:32:36.364918   17571 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:32:36.364920   17571 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 12:32:36.364929   17571 client.go:171] LocalClient.Create took 6.718788917s
	I1117 12:32:36.364941   17571 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:32:36.365065   17571 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117123155-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
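Editor's note: the kic.go step above unpacks the lz4-compressed preload tarball into the profile's named volume by bind-mounting the tarball read-only into a throwaway kicbase container and running tar inside it. A hedged Go sketch of issuing that same docker run is below; the wrapper function is hypothetical, the tar flags and mount layout are taken from the log, and the paths/image reference in main are shortened placeholders.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // extractPreload streams a preloaded-images tarball into a named docker
    // volume by running tar inside a disposable kicbase container, the same
    // shape as the `docker run --rm --entrypoint /usr/bin/tar ...` call above.
    func extractPreload(tarball, volume, baseImage string) error {
    	out, err := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		baseImage,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("extract preload: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := extractPreload(
    		"/path/to/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4",
    		"old-k8s-version-20211117123155-2067",
    		"gcr.io/k8s-minikube/kicbase:v0.0.28")
    	fmt.Println(err)
    }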
	I1117 12:32:38.370491   17571 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:32:38.370646   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:38.510718   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:38.510814   17571 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:38.692231   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:38.823512   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:38.823601   17571 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:39.160113   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:39.290736   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:39.290847   17571 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:39.759749   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:39.875969   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	W1117 12:32:39.876061   17571 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	W1117 12:32:39.876086   17571 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:39.876099   17571 start.go:129] duration metric: createHost completed in 10.277464594s
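Editor's note: every ssh attempt above fails at the same step, resolving which host port Docker mapped to the guest's 22/tcp via `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`, which can only succeed once the container actually exists. A hedged Go alternative that decodes the inspect JSON instead of using a Go template is sketched below; the struct and function names are illustrative.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // sshHostPort returns the host port Docker mapped to the container's 22/tcp
    // by decoding `docker container inspect` JSON output.
    func sshHostPort(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", name, err)
    	}
    	var info []struct {
    		NetworkSettings struct {
    			Ports map[string][]struct {
    				HostIp   string
    				HostPort string
    			}
    		}
    	}
    	if err := json.Unmarshal(out, &info); err != nil {
    		return "", err
    	}
    	if len(info) == 0 || len(info[0].NetworkSettings.Ports["22/tcp"]) == 0 {
    		return "", fmt.Errorf("no 22/tcp binding for %s", name)
    	}
    	return info[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
    }

    func main() {
    	port, err := sshHostPort("old-k8s-version-20211117123155-2067")
    	fmt.Println(port, err)
    }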
	I1117 12:32:39.877339   17571 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:32:39.877406   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:39.976920   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:39.976993   17571 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:40.175629   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:40.275043   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:40.275119   17571 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:40.573970   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:40.697142   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:32:40.697215   17571 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:41.368674   17571 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:32:41.507667   17571 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	W1117 12:32:41.507748   17571 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	W1117 12:32:41.507765   17571 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:41.507779   17571 fix.go:57] fixHost completed within 29.412944894s
	I1117 12:32:41.507789   17571 start.go:80] releasing machines lock for "old-k8s-version-20211117123155-2067", held for 29.412978376s
	W1117 12:32:41.507947   17571 out.go:241] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20211117123155-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20211117123155-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:32:41.555384   17571 out.go:176] 
	W1117 12:32:41.555501   17571 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:32:41.555515   17571 out.go:241] * 
	* 
	W1117 12:32:41.556278   17571 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:32:41.633342   17571 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20211117123155-2067 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0": exit status 80
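Editor's note: the test only records that `minikube start` exited with status 80 (the GUEST_PROVISION exit seen in the log); the post-mortem below shows what was actually left behind. For reference, a minimal Go sketch of how a helper can capture both the combined output and the numeric exit code of such a command; this is not the actual helpers_test.go code.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runCmd returns a command's combined stdout/stderr and its exit code,
    // treating a non-zero exit as data rather than a hard failure, the way the
    // integration helpers do before printing a post-mortem.
    func runCmd(name string, args ...string) (string, int, error) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	exitCode := 0
    	if err != nil {
    		if ee, ok := err.(*exec.ExitError); ok {
    			exitCode = ee.ExitCode()
    		} else {
    			return string(out), -1, err // command did not even start
    		}
    	}
    	return string(out), exitCode, nil
    }

    func main() {
    	out, code, err := runCmd("out/minikube-darwin-amd64", "start", "-p", "demo", "--driver=docker")
    	fmt.Println(code, err, len(out))
    }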
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d7ca1c95263768ec991b163c4ec05184896d56313ab8bd525bbeb9d87f5c3377",
	        "Created": "2021-11-17T20:32:30.054009903Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
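Editor's note: the post-mortem confirms that only the custom bridge network survived the failed start: 192.168.58.0/24 with gateway 192.168.58.1, MTU 1500, and no attached containers. A hedged Go sketch of pulling those same fields out of `docker network inspect` with encoding/json follows; the struct is trimmed to the fields the report shows and the names are illustrative.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // networkSummary decodes the subset of `docker network inspect` output that
    // the post-mortem above prints: name, driver, subnet, gateway and MTU.
    type networkSummary struct {
    	Name   string
    	Driver string
    	IPAM   struct {
    		Config []struct {
    			Subnet  string
    			Gateway string
    		}
    	}
    	Options map[string]string
    }

    func inspectNetwork(name string) (*networkSummary, error) {
    	out, err := exec.Command("docker", "network", "inspect", name).Output()
    	if err != nil {
    		return nil, fmt.Errorf("inspect network %s: %w", name, err)
    	}
    	var nets []networkSummary
    	if err := json.Unmarshal(out, &nets); err != nil {
    		return nil, err
    	}
    	if len(nets) == 0 {
    		return nil, fmt.Errorf("no such network %s", name)
    	}
    	return &nets[0], nil
    }

    func main() {
    	n, err := inspectNetwork("old-k8s-version-20211117123155-2067")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(n.Name, n.IPAM.Config[0].Gateway, n.Options["com.docker.network.driver.mtu"])
    }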
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (182.724132ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:32:41.974563   18039 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (46.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (49.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20211117123224-2067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-20211117123224-2067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 80 (49.093102362s)

                                                
                                                
-- stdout --
	* [no-preload-20211117123224-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node no-preload-20211117123224-2067 in cluster no-preload-20211117123224-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20211117123224-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:32:24.563395   17848 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:32:24.563536   17848 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:32:24.563541   17848 out.go:310] Setting ErrFile to fd 2...
	I1117 12:32:24.563544   17848 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:32:24.563623   17848 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:32:24.563930   17848 out.go:304] Setting JSON to false
	I1117 12:32:24.588952   17848 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3719,"bootTime":1637177425,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:32:24.589047   17848 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:32:24.616320   17848 out.go:176] * [no-preload-20211117123224-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:32:24.616516   17848 notify.go:174] Checking for updates...
	I1117 12:32:24.663294   17848 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:32:24.689237   17848 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:32:24.715258   17848 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:32:24.740990   17848 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:32:24.741393   17848 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:32:24.741478   17848 config.go:176] Loaded profile config "old-k8s-version-20211117123155-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 12:32:24.741513   17848 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:32:24.828329   17848 docker.go:132] docker version: linux-20.10.5
	I1117 12:32:24.828470   17848 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:32:24.977311   17848 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:32:24.930183314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
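Editor's note: before committing to the docker driver, minikube runs `docker system info --format "{{json .}}"` and reads off the values dumped above (NCPU:6, MemTotal:6234726400, CgroupDriver:cgroupfs, ServerVersion:20.10.5, and so on). A hedged Go sketch that decodes just a few of those fields is below; the struct is illustrative, not minikube's info.go type.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo holds a few of the fields visible in the log's docker info
    // dump; the real output carries many more.
    type dockerInfo struct {
    	NCPU          int
    	MemTotal      int64
    	CgroupDriver  string
    	ServerVersion string
    	OSType        string
    }

    func loadDockerInfo() (*dockerInfo, error) {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		return nil, fmt.Errorf("docker system info: %w", err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		return nil, err
    	}
    	return &info, nil
    }

    func main() {
    	info, err := loadDockerInfo()
    	if err != nil {
    		panic(err)
    	}
    	// The run above reported 6 CPUs and 6234726400 bytes (~5.8 GiB) of RAM.
    	fmt.Printf("%d CPUs, %d bytes RAM, docker %s on %s (%s)\n",
    		info.NCPU, info.MemTotal, info.ServerVersion, info.OSType, info.CgroupDriver)
    }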
	I1117 12:32:25.025793   17848 out.go:176] * Using the docker driver based on user configuration
	I1117 12:32:25.025843   17848 start.go:280] selected driver: docker
	I1117 12:32:25.025858   17848 start.go:775] validating driver "docker" against <nil>
	I1117 12:32:25.025901   17848 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:32:25.029235   17848 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:32:25.180047   17848 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:32:25.131635091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:32:25.180139   17848 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:32:25.180270   17848 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:32:25.180287   17848 cni.go:93] Creating CNI manager for ""
	I1117 12:32:25.180294   17848 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:32:25.180300   17848 start_flags.go:282] config:
	{Name:no-preload-20211117123224-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:no-preload-20211117123224-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:32:25.228958   17848 out.go:176] * Starting control plane node no-preload-20211117123224-2067 in cluster no-preload-20211117123224-2067
	I1117 12:32:25.229062   17848 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:32:25.254816   17848 out.go:176] * Pulling base image ...
	I1117 12:32:25.254900   17848 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:32:25.254950   17848 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:32:25.255112   17848 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/no-preload-20211117123224-2067/config.json ...
	I1117 12:32:25.255205   17848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/no-preload-20211117123224-2067/config.json: {Name:mka74809efdf1f41b3a315865af9c61e03b026d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:32:25.255268   17848 cache.go:107] acquiring lock: {Name:mk484f4aa10be29d59ecef162cc3ba4ef356bc71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:25.255269   17848 cache.go:107] acquiring lock: {Name:mk46c2aac0c807364b7b6718b28e798e38331a44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:25.256850   17848 cache.go:107] acquiring lock: {Name:mk51daa56a24576eb68d57c222971a7123f25c24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:25.256969   17848 cache.go:107] acquiring lock: {Name:mkfdfbbae55ac5b96e9234058d2251140315481d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:25.257102   17848 cache.go:107] acquiring lock: {Name:mk8510e8d29ffb1d7afc63ac2448ba0a514946b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:25.257205   17848 cache.go:107] acquiring lock: {Name:mk1cf5798a7a6d25ea3a3811b697e938466510b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:25.257231   17848 cache.go:107] acquiring lock: {Name:mke1ba390537bac8e8cb13b8ad3c21b706e43051 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:25.257361   17848 cache.go:107] acquiring lock: {Name:mkdf67b1af8680e831a8cb6a6b59deeb701a2c60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:25.257889   17848 cache.go:107] acquiring lock: {Name:mk45dbae0c82aa7e4329337c39882df418aeab32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:25.258204   17848 cache.go:107] acquiring lock: {Name:mkc38557d3f08ef749cdb79439f2e56bd72f6169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:25.258326   17848 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 exists
	I1117 12:32:25.258200   17848 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I1117 12:32:25.258347   17848 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 exists
	I1117 12:32:25.258345   17848 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.4" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4" took 1.35622ms
	I1117 12:32:25.258371   17848 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I1117 12:32:25.258362   17848 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.5 exists
	I1117 12:32:25.258357   17848 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 3.109461ms
	I1117 12:32:25.258387   17848 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.4 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 succeeded
	I1117 12:32:25.258383   17848 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.0-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0" took 1.228068ms
	I1117 12:32:25.258399   17848 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1117 12:32:25.258415   17848 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I1117 12:32:25.258415   17848 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 1.611456ms
	I1117 12:32:25.258424   17848 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.0-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 succeeded
	I1117 12:32:25.258440   17848 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I1117 12:32:25.258430   17848 cache.go:96] cache image "k8s.gcr.io/pause:3.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.5" took 3.047117ms
	I1117 12:32:25.258452   17848 cache.go:80] save to tar file k8s.gcr.io/pause:3.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.5 succeeded
	I1117 12:32:25.258447   17848 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.17548ms
	I1117 12:32:25.258473   17848 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1117 12:32:25.258520   17848 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.4-rc.0
	I1117 12:32:25.258615   17848 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.22.4-rc.0
	I1117 12:32:25.258657   17848 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0
	I1117 12:32:25.258697   17848 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.4-rc.0
	I1117 12:32:25.260036   17848 image.go:176] found k8s.gcr.io/kube-proxy:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-proxy} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-proxy:v1.22.4-rc.0} opener:0xc0000de150 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:32:25.260073   17848 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.4-rc.0
	I1117 12:32:25.260454   17848 image.go:176] found k8s.gcr.io/kube-scheduler:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-scheduler} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-scheduler:v1.22.4-rc.0} opener:0xc00042a070 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:32:25.260509   17848 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.4-rc.0
	I1117 12:32:25.260752   17848 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-controller-manager} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0} opener:0xc0000de310 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:32:25.260774   17848 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.4-rc.0
	I1117 12:32:25.261557   17848 image.go:176] found k8s.gcr.io/kube-apiserver:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-apiserver} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-apiserver:v1.22.4-rc.0} opener:0xc00042a150 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:32:25.261571   17848 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.4-rc.0
	I1117 12:32:25.262273   17848 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.22.4-rc.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.4-rc.0" took 5.5033ms
	I1117 12:32:25.262425   17848 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.22.4-rc.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.4-rc.0" took 7.014916ms
	I1117 12:32:25.263458   17848 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.22.4-rc.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.4-rc.0" took 8.245733ms
	I1117 12:32:25.263526   17848 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.4-rc.0" took 6.685531ms
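Editor's note: the burst of cache.go lines above takes a per-image lock, checks whether a cached tarball already exists under .minikube/cache/images, and only opens a download for the images that are missing (here, the four v1.22.4-rc.0 control-plane images). A simplified, hypothetical Go sketch of that "check tarball, else fetch" pattern follows; the fetch step is stubbed out and the layout helper is an approximation of what the log shows.

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // cachePath maps an image reference such as "k8s.gcr.io/pause:3.5" onto the
    // tarball layout seen in the log: .../cache/images/k8s.gcr.io/pause_3.5.
    func cachePath(cacheDir, image string) string {
    	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
    }

    // ensureCached reports whether the image tarball is already present; on a
    // miss the real code opens the registry and writes the tarball (stubbed here).
    func ensureCached(cacheDir, image string) (bool, error) {
    	p := cachePath(cacheDir, image)
    	if _, err := os.Stat(p); err == nil {
    		return true, nil // cache hit, like the "... exists" lines above
    	} else if !os.IsNotExist(err) {
    		return false, err
    	}
    	// cache miss: download and save the image tarball here.
    	return false, nil
    }

    func main() {
    	hit, err := ensureCached(os.ExpandEnv("$HOME/.minikube/cache/images"), "k8s.gcr.io/pause:3.5")
    	fmt.Println(hit, err)
    }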
	I1117 12:32:25.374555   17848 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:32:25.374577   17848 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:32:25.374588   17848 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:32:25.374621   17848 start.go:313] acquiring machines lock for no-preload-20211117123224-2067: {Name:mk30ecdb69a16cf786227c9355857466145cadb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:25.375390   17848 start.go:317] acquired machines lock for "no-preload-20211117123224-2067" in 755.546µs
	I1117 12:32:25.375421   17848 start.go:89] Provisioning new machine with config: &{Name:no-preload-20211117123224-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:no-preload-20211117123224-2067 Namespace:default APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}
	I1117 12:32:25.375480   17848 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:32:25.422935   17848 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:32:25.423264   17848 start.go:160] libmachine.API.Create for "no-preload-20211117123224-2067" (driver="docker")
	I1117 12:32:25.423306   17848 client.go:168] LocalClient.Create starting
	I1117 12:32:25.423452   17848 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:32:25.423522   17848 main.go:130] libmachine: Decoding PEM data...
	I1117 12:32:25.423561   17848 main.go:130] libmachine: Parsing certificate...
	I1117 12:32:25.423674   17848 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:32:25.423727   17848 main.go:130] libmachine: Decoding PEM data...
	I1117 12:32:25.423748   17848 main.go:130] libmachine: Parsing certificate...
	I1117 12:32:25.424757   17848 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:32:25.526704   17848 cli_runner.go:162] docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:32:25.526805   17848 network_create.go:254] running [docker network inspect no-preload-20211117123224-2067] to gather additional debugging logs...
	I1117 12:32:25.526821   17848 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067
	W1117 12:32:25.631894   17848 cli_runner.go:162] docker network inspect no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:32:25.631916   17848 network_create.go:257] error running [docker network inspect no-preload-20211117123224-2067]: docker network inspect no-preload-20211117123224-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20211117123224-2067
	I1117 12:32:25.631933   17848 network_create.go:259] output of [docker network inspect no-preload-20211117123224-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20211117123224-2067
	
	** /stderr **
	I1117 12:32:25.632019   17848 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:32:25.747441   17848 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00100a0c0] misses:0}
	I1117 12:32:25.747478   17848 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:32:25.747498   17848 network_create.go:106] attempt to create docker network no-preload-20211117123224-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:32:25.747571   17848 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067
	I1117 12:32:30.991960   17848 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067: (5.24441087s)
	I1117 12:32:30.991996   17848 network_create.go:90] docker network no-preload-20211117123224-2067 192.168.49.0/24 created
	I1117 12:32:30.992017   17848 kic.go:106] calculated static IP "192.168.49.2" for the "no-preload-20211117123224-2067" container
	I1117 12:32:30.992130   17848 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:32:31.091703   17848 cli_runner.go:115] Run: docker volume create no-preload-20211117123224-2067 --label name.minikube.sigs.k8s.io=no-preload-20211117123224-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:32:31.192553   17848 oci.go:102] Successfully created a docker volume no-preload-20211117123224-2067
	I1117 12:32:31.192696   17848 cli_runner.go:115] Run: docker run --rm --name no-preload-20211117123224-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20211117123224-2067 --entrypoint /usr/bin/test -v no-preload-20211117123224-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:32:31.656205   17848 oci.go:106] Successfully prepared a docker volume no-preload-20211117123224-2067
	E1117 12:32:31.656249   17848 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:32:31.656265   17848 client.go:171] LocalClient.Create took 6.233016851s
	I1117 12:32:31.656258   17848 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:32:33.660735   17848 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:32:33.660812   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:32:33.758042   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:32:33.758122   17848 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:34.037629   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:32:34.135546   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:32:34.135623   17848 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:34.685902   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:32:34.784581   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:32:34.784658   17848 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:35.440000   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:32:35.539265   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	W1117 12:32:35.539337   17848 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	W1117 12:32:35.539351   17848 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:35.539361   17848 start.go:129] duration metric: createHost completed in 10.16398223s
	I1117 12:32:35.539367   17848 start.go:80] releasing machines lock for "no-preload-20211117123224-2067", held for 10.164075379s
	W1117 12:32:35.539381   17848 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:32:35.539805   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:35.639537   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:35.639579   17848 delete.go:82] Unable to get host status for no-preload-20211117123224-2067, assuming it has already been deleted: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	W1117 12:32:35.639720   17848 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:32:35.639731   17848 start.go:547] Will try again in 5 seconds ...
	I1117 12:32:40.642802   17848 start.go:313] acquiring machines lock for no-preload-20211117123224-2067: {Name:mk30ecdb69a16cf786227c9355857466145cadb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:40.642888   17848 start.go:317] acquired machines lock for "no-preload-20211117123224-2067" in 69.132µs
	I1117 12:32:40.642917   17848 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:32:40.642925   17848 fix.go:55] fixHost starting: 
	I1117 12:32:40.643168   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:40.747930   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:40.747982   17848 fix.go:108] recreateIfNeeded on no-preload-20211117123224-2067: state= err=unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:40.748004   17848 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:32:40.776647   17848 out.go:176] * docker "no-preload-20211117123224-2067" container is missing, will recreate.
	I1117 12:32:40.776663   17848 delete.go:124] DEMOLISHING no-preload-20211117123224-2067 ...
	I1117 12:32:40.776781   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:40.897186   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:32:40.897240   17848 stop.go:75] unable to get state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:40.897262   17848 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:40.897713   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:41.016248   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:41.016311   17848 delete.go:82] Unable to get host status for no-preload-20211117123224-2067, assuming it has already been deleted: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:41.016469   17848 cli_runner.go:115] Run: docker container inspect -f {{.Id}} no-preload-20211117123224-2067
	W1117 12:32:41.134456   17848 cli_runner.go:162] docker container inspect -f {{.Id}} no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:32:41.134484   17848 kic.go:360] could not find the container no-preload-20211117123224-2067 to remove it. will try anyways
	I1117 12:32:41.134576   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:41.255994   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:32:41.256043   17848 oci.go:83] error getting container status, will try to delete anyways: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:41.256159   17848 cli_runner.go:115] Run: docker exec --privileged -t no-preload-20211117123224-2067 /bin/bash -c "sudo init 0"
	W1117 12:32:41.374038   17848 cli_runner.go:162] docker exec --privileged -t no-preload-20211117123224-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:32:41.374077   17848 oci.go:656] error shutdown no-preload-20211117123224-2067: docker exec --privileged -t no-preload-20211117123224-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:42.376283   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:42.502987   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:42.503040   17848 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:42.503054   17848 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:32:42.503080   17848 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:42.969180   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:43.080907   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:43.080952   17848 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:43.080966   17848 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:32:43.080992   17848 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:43.974420   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:44.079474   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:44.079515   17848 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:44.079524   17848 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:32:44.079544   17848 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:44.719286   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:44.820367   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:44.820413   17848 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:44.820424   17848 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:32:44.820450   17848 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:45.935845   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:46.041120   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:46.041160   17848 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:46.041169   17848 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:32:46.041195   17848 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:47.560685   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:47.665097   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:47.665136   17848 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:47.665145   17848 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:32:47.665164   17848 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:50.710923   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:50.817229   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:50.817270   17848 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:50.817288   17848 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:32:50.817311   17848 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:56.606937   17848 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:32:56.711055   17848 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:56.711095   17848 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:32:56.711105   17848 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:32:56.711141   17848 oci.go:87] couldn't shut down no-preload-20211117123224-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	 
	I1117 12:32:56.711220   17848 cli_runner.go:115] Run: docker rm -f -v no-preload-20211117123224-2067
	I1117 12:32:56.812181   17848 cli_runner.go:115] Run: docker container inspect -f {{.Id}} no-preload-20211117123224-2067
	W1117 12:32:56.911809   17848 cli_runner.go:162] docker container inspect -f {{.Id}} no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:32:56.911926   17848 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:32:57.011763   17848 cli_runner.go:115] Run: docker network rm no-preload-20211117123224-2067
	I1117 12:33:01.263114   17848 cli_runner.go:168] Completed: docker network rm no-preload-20211117123224-2067: (4.251332827s)
	W1117 12:33:01.263833   17848 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:33:01.263841   17848 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:33:02.273990   17848 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:33:02.301609   17848 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:33:02.301790   17848 start.go:160] libmachine.API.Create for "no-preload-20211117123224-2067" (driver="docker")
	I1117 12:33:02.301827   17848 client.go:168] LocalClient.Create starting
	I1117 12:33:02.302044   17848 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:33:02.302131   17848 main.go:130] libmachine: Decoding PEM data...
	I1117 12:33:02.302176   17848 main.go:130] libmachine: Parsing certificate...
	I1117 12:33:02.302292   17848 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:33:02.302361   17848 main.go:130] libmachine: Decoding PEM data...
	I1117 12:33:02.302388   17848 main.go:130] libmachine: Parsing certificate...
	I1117 12:33:02.323832   17848 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:33:02.425783   17848 cli_runner.go:162] docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:33:02.425899   17848 network_create.go:254] running [docker network inspect no-preload-20211117123224-2067] to gather additional debugging logs...
	I1117 12:33:02.425914   17848 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067
	W1117 12:33:02.526057   17848 cli_runner.go:162] docker network inspect no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:33:02.526081   17848 network_create.go:257] error running [docker network inspect no-preload-20211117123224-2067]: docker network inspect no-preload-20211117123224-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20211117123224-2067
	I1117 12:33:02.526093   17848 network_create.go:259] output of [docker network inspect no-preload-20211117123224-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20211117123224-2067
	
	** /stderr **
	I1117 12:33:02.526183   17848 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:33:02.626334   17848 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00100a0c0] amended:false}} dirty:map[] misses:0}
	I1117 12:33:02.626366   17848 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:33:02.626553   17848 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00100a0c0] amended:true}} dirty:map[192.168.49.0:0xc00100a0c0 192.168.58.0:0xc00081c198] misses:0}
	I1117 12:33:02.626565   17848 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:33:02.626572   17848 network_create.go:106] attempt to create docker network no-preload-20211117123224-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:33:02.626658   17848 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067
	W1117 12:33:02.726831   17848 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067 returned with exit code 1
	W1117 12:33:02.726876   17848 network_create.go:98] failed to create docker network no-preload-20211117123224-2067 192.168.58.0/24, will retry: subnet is taken
	I1117 12:33:02.727105   17848 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00100a0c0] amended:true}} dirty:map[192.168.49.0:0xc00100a0c0 192.168.58.0:0xc00081c198] misses:1}
	I1117 12:33:02.727122   17848 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:33:02.727294   17848 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00100a0c0] amended:true}} dirty:map[192.168.49.0:0xc00100a0c0 192.168.58.0:0xc00081c198 192.168.67.0:0xc00000edb8] misses:1}
	I1117 12:33:02.727304   17848 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:33:02.727312   17848 network_create.go:106] attempt to create docker network no-preload-20211117123224-2067 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 12:33:02.727392   17848 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067
	I1117 12:33:07.895303   17848 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067: (5.167867445s)
	I1117 12:33:07.895338   17848 network_create.go:90] docker network no-preload-20211117123224-2067 192.168.67.0/24 created
	I1117 12:33:07.895352   17848 kic.go:106] calculated static IP "192.168.67.2" for the "no-preload-20211117123224-2067" container
	I1117 12:33:07.895462   17848 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:33:07.995431   17848 cli_runner.go:115] Run: docker volume create no-preload-20211117123224-2067 --label name.minikube.sigs.k8s.io=no-preload-20211117123224-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:33:08.095094   17848 oci.go:102] Successfully created a docker volume no-preload-20211117123224-2067
	I1117 12:33:08.095212   17848 cli_runner.go:115] Run: docker run --rm --name no-preload-20211117123224-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20211117123224-2067 --entrypoint /usr/bin/test -v no-preload-20211117123224-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:33:08.487530   17848 oci.go:106] Successfully prepared a docker volume no-preload-20211117123224-2067
	E1117 12:33:08.487580   17848 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:33:08.487587   17848 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:33:08.487597   17848 client.go:171] LocalClient.Create took 6.185818465s
	I1117 12:33:10.492640   17848 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:33:10.492789   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:33:10.596384   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:33:10.596479   17848 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:10.775322   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:33:10.878180   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:33:10.878267   17848 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:11.217321   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:33:11.318566   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:33:11.318643   17848 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:11.781963   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:33:11.886162   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	W1117 12:33:11.886260   17848 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	W1117 12:33:11.886282   17848 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:11.886296   17848 start.go:129] duration metric: createHost completed in 9.612314965s
	I1117 12:33:11.886365   17848 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:33:11.886429   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:33:11.986522   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:33:11.986601   17848 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:12.192708   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:33:12.296778   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:33:12.296854   17848 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:12.599513   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:33:12.699663   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:33:12.699740   17848 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:13.370857   17848 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:33:13.480394   17848 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	W1117 12:33:13.480483   17848 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	W1117 12:33:13.480498   17848 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:13.480509   17848 fix.go:57] fixHost completed within 32.837892049s
	I1117 12:33:13.480522   17848 start.go:80] releasing machines lock for "no-preload-20211117123224-2067", held for 32.837934888s
	W1117 12:33:13.480706   17848 out.go:241] * Failed to start docker container. Running "minikube delete -p no-preload-20211117123224-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p no-preload-20211117123224-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:33:13.528218   17848 out.go:176] 
	W1117 12:33:13.528474   17848 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:33:13.528492   17848 out.go:241] * 
	* 
	W1117 12:33:13.529816   17848 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:33:13.613393   17848 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p no-preload-20211117123224-2067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "7e70d2b6de16a9bfbf623591d0a58e26ebaef43f13701a504004c90e58b0cb73",
	        "Created": "2021-11-17T20:33:02.835448625Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (159.713377ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:33:13.890243   18325 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (49.37s)
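Editor's note: the FirstStart log above shows minikube's kic driver probing for a free private subnet and stepping up when one is reserved or already taken (192.168.49.0/24, then 192.168.58.0/24, then 192.168.67.0/24) before issuing "docker network create". The Go program below is a minimal, hypothetical sketch of that fallback loop for anyone who wants to exercise the same "docker network create" invocation by hand; it is not minikube's own code, and the profile name is simply reused from the log. Note that in this run the network create eventually succeeded and the real failure came later, from "Unable to locate kernel modules".

package main

import (
	"fmt"
	"os/exec"
)

// tryCreateNetwork issues the same "docker network create" command seen in the
// log above (bridge driver, explicit subnet/gateway, MTU 1500, minikube label).
func tryCreateNetwork(name, subnet, gateway string) error {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	name := "no-preload-20211117123224-2067" // profile name taken from the log
	// The log walks the third octet 49 -> 58 -> 67 (steps of 9), skipping
	// subnets that are reserved or that "docker network create" reports as taken.
	for octet := 49; octet <= 67; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		if err := tryCreateNetwork(name, subnet, gateway); err != nil {
			fmt.Printf("subnet %s unavailable, trying the next one: %v\n", subnet, err)
			continue
		}
		fmt.Printf("created network %s on %s\n", name, subnet)
		return
	}
	fmt.Println("no free private subnet found in the probed range")
}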

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20211117123155-2067 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117123155-2067 create -f testdata/busybox.yaml: exit status 1 (51.640091ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20211117123155-2067" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:181: kubectl --context old-k8s-version-20211117123155-2067 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d7ca1c95263768ec991b163c4ec05184896d56313ab8bd525bbeb9d87f5c3377",
	        "Created": "2021-11-17T20:32:30.054009903Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (184.967196ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:32:42.342610   18049 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d7ca1c95263768ec991b163c4ec05184896d56313ab8bd525bbeb9d87f5c3377",
	        "Created": "2021-11-17T20:32:30.054009903Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (167.903447ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:32:42.642138   18062 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.67s)
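Editor's note: the DeployApp failure is a knock-on effect of FirstStart. Because the host never came up, there is no "old-k8s-version-20211117123155-2067" kubeconfig context for kubectl to use, and the post-mortem status probe ("minikube status --format={{.Host}}") exits with status 7 and prints "Nonexistent". The sketch below shows how a deploy step could be guarded behind that same status check; the binary path and profile name are taken from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the same status probe used by helpers_test.go above:
// "minikube status --format={{.Host}} -p <profile> -n <profile>".
// For a host that never started it exits non-zero and prints "Nonexistent".
func hostState(minikubeBin, profile string) (string, error) {
	out, err := exec.Command(minikubeBin, "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	profile := "old-k8s-version-20211117123155-2067" // profile name from the log
	state, err := hostState("out/minikube-darwin-amd64", profile)
	if state != "Running" {
		// No running host means no kubeconfig context, which is exactly the
		// `error: context "..." does not exist` failure seen above.
		fmt.Printf("host state is %q (err: %v); skipping kubectl deploy\n", state, err)
		return
	}
	// Only now is it meaningful to run:
	//   kubectl --context <profile> create -f testdata/busybox.yaml
	fmt.Println("host is Running; proceeding with kubectl --context", profile)
}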

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20211117123155-2067 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20211117123155-2067 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117123155-2067 describe deploy/metrics-server -n kube-system: exit status 1 (40.048377ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20211117123155-2067" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20211117123155-2067 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d7ca1c95263768ec991b163c4ec05184896d56313ab8bd525bbeb9d87f5c3377",
	        "Created": "2021-11-17T20:32:30.054009903Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (147.315505ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:32:43.190385   18080 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.55s)
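Editor's note: throughout these logs, retry.go backs off with growing "will retry after ..." intervals, both when FirstStart polls for the container's SSH port and when the Stop test below repeatedly inspects a container that no longer exists. The sketch below imitates that pattern against "docker container inspect --format={{.State.Status}}"; the starting delay, growth factor, and attempt count are made-up illustration values, not minikube's actual schedule.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// inspectState mirrors the probe used throughout the logs:
// "docker container inspect <name> --format={{.State.Status}}".
func inspectState(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format={{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// retryInspect retries with a growing, jittered delay and gives up after
// maxAttempts, roughly like the increasing "will retry after" intervals above.
func retryInspect(container string, maxAttempts int) (string, error) {
	delay := 500 * time.Millisecond // illustrative starting delay
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		state, err := inspectState(container)
		if err == nil {
			return state, nil
		}
		lastErr = err
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed (%v); will retry after %v\n", attempt, err, sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	// Container name taken from the Stop test below; it does not exist, so
	// every attempt fails with "No such container", as in the log.
	if _, err := retryInspect("old-k8s-version-20211117123155-2067", 4); err != nil {
		fmt.Println(err)
	}
}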

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (14.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20211117123155-2067 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p old-k8s-version-20211117123155-2067 --alsologtostderr -v=3: exit status 82 (14.738890236s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-20211117123155-2067"  ...
	* Stopping node "old-k8s-version-20211117123155-2067"  ...
	* Stopping node "old-k8s-version-20211117123155-2067"  ...
	* Stopping node "old-k8s-version-20211117123155-2067"  ...
	* Stopping node "old-k8s-version-20211117123155-2067"  ...
	* Stopping node "old-k8s-version-20211117123155-2067"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:32:43.231732   18086 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:32:43.232458   18086 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:32:43.232464   18086 out.go:310] Setting ErrFile to fd 2...
	I1117 12:32:43.232467   18086 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:32:43.232543   18086 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:32:43.232709   18086 out.go:304] Setting JSON to false
	I1117 12:32:43.232861   18086 mustload.go:65] Loading cluster: old-k8s-version-20211117123155-2067
	I1117 12:32:43.233087   18086 config.go:176] Loaded profile config "old-k8s-version-20211117123155-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 12:32:43.233131   18086 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/old-k8s-version-20211117123155-2067/config.json ...
	I1117 12:32:43.233422   18086 mustload.go:65] Loading cluster: old-k8s-version-20211117123155-2067
	I1117 12:32:43.233515   18086 config.go:176] Loaded profile config "old-k8s-version-20211117123155-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 12:32:43.233549   18086 stop.go:39] StopHost: old-k8s-version-20211117123155-2067
	I1117 12:32:43.260388   18086 out.go:176] * Stopping node "old-k8s-version-20211117123155-2067"  ...
	I1117 12:32:43.260670   18086 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:43.364992   18086 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:32:43.365055   18086 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	W1117 12:32:43.365078   18086 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:43.365103   18086 retry.go:31] will retry after 1.104660288s: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:44.471672   18086 stop.go:39] StopHost: old-k8s-version-20211117123155-2067
	I1117 12:32:44.499153   18086 out.go:176] * Stopping node "old-k8s-version-20211117123155-2067"  ...
	I1117 12:32:44.499383   18086 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:44.617988   18086 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:32:44.618023   18086 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	W1117 12:32:44.618040   18086 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:44.618056   18086 retry.go:31] will retry after 2.160763633s: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:46.786236   18086 stop.go:39] StopHost: old-k8s-version-20211117123155-2067
	I1117 12:32:46.813323   18086 out.go:176] * Stopping node "old-k8s-version-20211117123155-2067"  ...
	I1117 12:32:46.813552   18086 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:46.916472   18086 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:32:46.916511   18086 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	W1117 12:32:46.916529   18086 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:46.917062   18086 retry.go:31] will retry after 2.62026012s: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:49.547493   18086 stop.go:39] StopHost: old-k8s-version-20211117123155-2067
	I1117 12:32:49.584174   18086 out.go:176] * Stopping node "old-k8s-version-20211117123155-2067"  ...
	I1117 12:32:49.584426   18086 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:49.688024   18086 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:32:49.688071   18086 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	W1117 12:32:49.688091   18086 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:49.688110   18086 retry.go:31] will retry after 3.164785382s: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:52.859527   18086 stop.go:39] StopHost: old-k8s-version-20211117123155-2067
	I1117 12:32:52.887195   18086 out.go:176] * Stopping node "old-k8s-version-20211117123155-2067"  ...
	I1117 12:32:52.887396   18086 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:52.990866   18086 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:32:52.990912   18086 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	W1117 12:32:52.990935   18086 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:52.990954   18086 retry.go:31] will retry after 4.680977329s: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:57.675397   18086 stop.go:39] StopHost: old-k8s-version-20211117123155-2067
	I1117 12:32:57.703045   18086 out.go:176] * Stopping node "old-k8s-version-20211117123155-2067"  ...
	I1117 12:32:57.703285   18086 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:57.803682   18086 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:32:57.803726   18086 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	W1117 12:32:57.803747   18086 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:57.830505   18086 out.go:176] 
	W1117 12:32:57.830626   18086 out.go:241] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20211117123155-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-20211117123155-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	W1117 12:32:57.830635   18086 out.go:241] * 
	* 
	W1117 12:32:57.833151   18086 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:32:57.909170   18086 out.go:176] 

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p old-k8s-version-20211117123155-2067 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:

-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d7ca1c95263768ec991b163c4ec05184896d56313ab8bd525bbeb9d87f5c3377",
	        "Created": "2021-11-17T20:32:30.054009903Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
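Note that the JSON above is the leftover docker *network* named old-k8s-version-20211117123155-2067 (Scope/Driver/IPAM fields, empty Containers map), not the container: a bare `docker inspect NAME` matches any object type carrying that name, so the post-mortem still gets a hit even though the container itself is gone. Pinning the object type makes the distinction explicit; a small sketch (illustrative only, not part of helpers_test.go):

    // inspectByType mirrors the post-mortem `docker inspect` above, but pins
    // the object type so a network that shares the profile name cannot mask a
    // missing container. Illustrative sketch only.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func inspectByType(objType, name string) ([]byte, error) {
    	return exec.Command("docker", "inspect", "--type", objType, name).CombinedOutput()
    }

    func main() {
    	name := "old-k8s-version-20211117123155-2067"
    	if out, err := inspectByType("container", name); err != nil {
    		fmt.Printf("container lookup failed (expected here): %s\n", out)
    	}
    	if out, err := inspectByType("network", name); err == nil {
    		fmt.Printf("leftover network found:\n%s\n", out)
    	}
    }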
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (143.169923ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:32:58.181114   18155 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (14.99s)
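The retry.go lines in the stop output above show the same state probe being re-run with growing delays (roughly 1.1s, 2.2s, 2.6s, 3.2s, 4.7s) before `minikube stop` gives up with GUEST_STOP_TIMEOUT. A bare-bones version of that retry-with-backoff shape, assuming nothing about minikube's internal retry package:

    // retryWithBackoff re-runs fn with an increasing, slightly jittered delay
    // until it succeeds or the time budget runs out -- the shape of the
    // "will retry after ..." lines above. Sketch only, not minikube's retry package.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(budget time.Duration, fn func() error) error {
    	deadline := time.Now().Add(budget)
    	delay := time.Second
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("giving up: %w", err)
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    		delay = delay * 3 / 2 // grow the base delay each attempt
    	}
    }

    func main() {
    	attempts := 0
    	err := retryWithBackoff(10*time.Second, func() error {
    		attempts++
    		if attempts < 4 {
    			return errors.New("No such container: old-k8s-version-20211117123155-2067")
    		}
    		return nil
    	})
    	fmt.Println("result:", err, "after", attempts, "attempts")
    }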

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (144.092391ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:32:58.325407   18160 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20211117123155-2067 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:

-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d7ca1c95263768ec991b163c4ec05184896d56313ab8bd525bbeb9d87f5c3377",
	        "Created": "2021-11-17T20:32:30.054009903Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (144.523855ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:32:58.799914   18174 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.62s)
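The assertion at start_stop_delete_test.go:226 compares the string printed by `minikube status --format={{.Host}}`: after a clean stop it should read "Stopped", but because the container no longer exists the status command prints "Nonexistent" and exits non-zero (exit status 7, which the helper notes "may be ok"). A small driver for that check, as a sketch of what the helper does rather than its actual code:

    // hostStatus runs `minikube status --format={{.Host}}` for a profile and
    // returns the printed host state ("Running", "Stopped", "Nonexistent", ...).
    // A non-zero exit is expected for anything but a healthy cluster, so the
    // captured output is returned alongside the error. Sketch only.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func hostStatus(minikubeBin, profile string) (string, error) {
    	out, err := exec.Command(minikubeBin, "status",
    		"--format={{.Host}}", "-p", profile).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	state, err := hostStatus("out/minikube-darwin-amd64",
    		"old-k8s-version-20211117123155-2067")
    	if err != nil {
    		fmt.Println("status exited non-zero (may be ok):", err)
    	}
    	if state != "Stopped" {
    		fmt.Printf("expected post-stop host status %q but got %q\n", "Stopped", state)
    	}
    }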

TestStartStop/group/old-k8s-version/serial/SecondStart (77.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20211117123155-2067 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20211117123155-2067 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0: exit status 80 (1m16.992794582s)

-- stdout --
	* [old-k8s-version-20211117123155-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Kubernetes 1.22.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20211117123155-2067 in cluster old-k8s-version-20211117123155-2067
	* Pulling base image ...
	* docker "old-k8s-version-20211117123155-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-20211117123155-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:32:58.842224   18179 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:32:58.842360   18179 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:32:58.842365   18179 out.go:310] Setting ErrFile to fd 2...
	I1117 12:32:58.842368   18179 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:32:58.842441   18179 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:32:58.842705   18179 out.go:304] Setting JSON to false
	I1117 12:32:58.871154   18179 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3753,"bootTime":1637177425,"procs":323,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:32:58.871247   18179 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:32:58.897707   18179 out.go:176] * [old-k8s-version-20211117123155-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:32:58.897786   18179 notify.go:174] Checking for updates...
	I1117 12:32:58.950390   18179 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:32:58.976453   18179 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:32:59.002236   18179 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:32:59.028489   18179 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:32:59.028872   18179 config.go:176] Loaded profile config "old-k8s-version-20211117123155-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 12:32:59.055383   18179 out.go:176] * Kubernetes 1.22.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.22.3
	I1117 12:32:59.055406   18179 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:32:59.142932   18179 docker.go:132] docker version: linux-20.10.5
	I1117 12:32:59.143101   18179 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:32:59.297086   18179 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:49 SystemTime:2021-11-17 20:32:59.250000944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:32:59.344597   18179 out.go:176] * Using the docker driver based on existing profile
	I1117 12:32:59.344632   18179 start.go:280] selected driver: docker
	I1117 12:32:59.344644   18179 start.go:775] validating driver "docker" against &{Name:old-k8s-version-20211117123155-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20211117123155-2067 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStr
ing:/Users:/minikube-host}
	I1117 12:32:59.344774   18179 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:32:59.348047   18179 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:32:59.498975   18179 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:49 SystemTime:2021-11-17 20:32:59.452886123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:32:59.499112   18179 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:32:59.499132   18179 cni.go:93] Creating CNI manager for ""
	I1117 12:32:59.499139   18179 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:32:59.499158   18179 start_flags.go:282] config:
	{Name:old-k8s-version-20211117123155-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:old-k8s-version-20211117123155-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docke
r CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:32:59.546787   18179 out.go:176] * Starting control plane node old-k8s-version-20211117123155-2067 in cluster old-k8s-version-20211117123155-2067
	I1117 12:32:59.546821   18179 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:32:59.614777   18179 out.go:176] * Pulling base image ...
	I1117 12:32:59.614821   18179 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 12:32:59.614849   18179 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:32:59.614875   18179 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4
	I1117 12:32:59.614888   18179 cache.go:57] Caching tarball of preloaded images
	I1117 12:32:59.615015   18179 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:32:59.615029   18179 cache.go:60] Finished verifying existence of preloaded tar for  v1.14.0 on docker
	I1117 12:32:59.615564   18179 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/old-k8s-version-20211117123155-2067/config.json ...
	I1117 12:32:59.728412   18179 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:32:59.728427   18179 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:32:59.728439   18179 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:32:59.728482   18179 start.go:313] acquiring machines lock for old-k8s-version-20211117123155-2067: {Name:mkdcdd296c413d69b1c0c600c9bbeca63dadcf75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:32:59.728575   18179 start.go:317] acquired machines lock for "old-k8s-version-20211117123155-2067" in 68.936µs
	I1117 12:32:59.728597   18179 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:32:59.728606   18179 fix.go:55] fixHost starting: 
	I1117 12:32:59.728876   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:59.829374   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:32:59.829444   18179 fix.go:108] recreateIfNeeded on old-k8s-version-20211117123155-2067: state= err=unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:59.829479   18179 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:32:59.877925   18179 out.go:176] * docker "old-k8s-version-20211117123155-2067" container is missing, will recreate.
	I1117 12:32:59.877957   18179 delete.go:124] DEMOLISHING old-k8s-version-20211117123155-2067 ...
	I1117 12:32:59.878088   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:32:59.978209   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:32:59.978250   18179 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:59.978267   18179 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:32:59.978678   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:00.079926   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:00.079973   18179 delete.go:82] Unable to get host status for old-k8s-version-20211117123155-2067, assuming it has already been deleted: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:00.080079   18179 cli_runner.go:115] Run: docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067
	W1117 12:33:00.179931   18179 cli_runner.go:162] docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:33:00.179959   18179 kic.go:360] could not find the container old-k8s-version-20211117123155-2067 to remove it. will try anyways
	I1117 12:33:00.180046   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:00.281911   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:33:00.281954   18179 oci.go:83] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:00.282038   18179 cli_runner.go:115] Run: docker exec --privileged -t old-k8s-version-20211117123155-2067 /bin/bash -c "sudo init 0"
	W1117 12:33:00.383520   18179 cli_runner.go:162] docker exec --privileged -t old-k8s-version-20211117123155-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:33:00.383548   18179 oci.go:656] error shutdown old-k8s-version-20211117123155-2067: docker exec --privileged -t old-k8s-version-20211117123155-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:01.385655   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:01.487496   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:01.487559   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:01.487572   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:01.487614   18179 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:02.047313   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:02.153364   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:02.153406   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:02.153423   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:02.153446   18179 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:03.243567   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:03.345513   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:03.345561   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:03.345579   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:03.345603   18179 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:04.656060   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:04.756985   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:04.757022   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:04.757029   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:04.757052   18179 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:06.344369   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:06.446451   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:06.446492   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:06.446506   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:06.446532   18179 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:08.788658   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:08.915677   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:08.915722   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:08.915732   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:08.915760   18179 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:13.430767   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:13.623517   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:13.623562   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:13.623582   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:13.623607   18179 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:16.851574   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:16.951897   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:16.951937   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:16.951946   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:16.951971   18179 oci.go:87] couldn't shut down old-k8s-version-20211117123155-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	 
	I1117 12:33:16.952056   18179 cli_runner.go:115] Run: docker rm -f -v old-k8s-version-20211117123155-2067
	I1117 12:33:17.052849   18179 cli_runner.go:115] Run: docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067
	W1117 12:33:17.153003   18179 cli_runner.go:162] docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:33:17.153107   18179 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:33:17.253145   18179 cli_runner.go:115] Run: docker network rm old-k8s-version-20211117123155-2067
	I1117 12:33:20.705654   18179 cli_runner.go:168] Completed: docker network rm old-k8s-version-20211117123155-2067: (3.452494771s)
	W1117 12:33:20.705918   18179 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:33:20.705925   18179 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:33:21.711413   18179 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:33:21.738909   18179 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:33:21.739141   18179 start.go:160] libmachine.API.Create for "old-k8s-version-20211117123155-2067" (driver="docker")
	I1117 12:33:21.739205   18179 client.go:168] LocalClient.Create starting
	I1117 12:33:21.739445   18179 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:33:21.739530   18179 main.go:130] libmachine: Decoding PEM data...
	I1117 12:33:21.739563   18179 main.go:130] libmachine: Parsing certificate...
	I1117 12:33:21.739707   18179 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:33:21.739782   18179 main.go:130] libmachine: Decoding PEM data...
	I1117 12:33:21.739818   18179 main.go:130] libmachine: Parsing certificate...
	I1117 12:33:21.761261   18179 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:33:21.866252   18179 cli_runner.go:162] docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:33:21.866349   18179 network_create.go:254] running [docker network inspect old-k8s-version-20211117123155-2067] to gather additional debugging logs...
	I1117 12:33:21.866366   18179 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067
	W1117 12:33:21.965749   18179 cli_runner.go:162] docker network inspect old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:33:21.965772   18179 network_create.go:257] error running [docker network inspect old-k8s-version-20211117123155-2067]: docker network inspect old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20211117123155-2067
	I1117 12:33:21.965789   18179 network_create.go:259] output of [docker network inspect old-k8s-version-20211117123155-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20211117123155-2067
	
	** /stderr **
	I1117 12:33:21.965877   18179 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:33:22.065874   18179 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000136988] misses:0}
	I1117 12:33:22.065911   18179 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:33:22.065926   18179 network_create.go:106] attempt to create docker network old-k8s-version-20211117123155-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:33:22.066013   18179 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117123155-2067
	I1117 12:33:27.075420   18179 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117123155-2067: (5.009409922s)
	I1117 12:33:27.075445   18179 network_create.go:90] docker network old-k8s-version-20211117123155-2067 192.168.49.0/24 created
	I1117 12:33:27.075462   18179 kic.go:106] calculated static IP "192.168.49.2" for the "old-k8s-version-20211117123155-2067" container
	I1117 12:33:27.075559   18179 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:33:27.175637   18179 cli_runner.go:115] Run: docker volume create old-k8s-version-20211117123155-2067 --label name.minikube.sigs.k8s.io=old-k8s-version-20211117123155-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:33:27.277930   18179 oci.go:102] Successfully created a docker volume old-k8s-version-20211117123155-2067
	I1117 12:33:27.278039   18179 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20211117123155-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20211117123155-2067 --entrypoint /usr/bin/test -v old-k8s-version-20211117123155-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:33:27.673036   18179 oci.go:106] Successfully prepared a docker volume old-k8s-version-20211117123155-2067
	E1117 12:33:27.673091   18179 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:33:27.673109   18179 client.go:171] LocalClient.Create took 5.933947064s
	I1117 12:33:27.673109   18179 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 12:33:27.673134   18179 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:33:27.673231   18179 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117123155-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:33:29.673345   18179 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:33:29.673494   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:33:29.843253   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:33:29.843333   18179 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:29.997767   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:33:30.143574   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:33:30.143684   18179 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:30.446965   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:33:30.583687   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:33:30.583769   18179 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:31.155858   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:33:31.291912   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	W1117 12:33:31.292113   18179 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	W1117 12:33:31.292145   18179 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:31.292163   18179 start.go:129] duration metric: createHost completed in 9.580760426s
	I1117 12:33:31.292263   18179 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:33:31.292426   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:33:31.436059   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:33:31.436146   18179 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:31.614976   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:33:31.751104   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:33:31.751193   18179 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:32.086444   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:33:32.211359   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:33:32.211454   18179 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:32.673131   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:33:32.795081   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	W1117 12:33:32.795164   18179 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	W1117 12:33:32.795182   18179 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:32.795193   18179 fix.go:57] fixHost completed within 33.066894186s
	I1117 12:33:32.795201   18179 start.go:80] releasing machines lock for "old-k8s-version-20211117123155-2067", held for 33.066922807s
	W1117 12:33:32.795216   18179 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:33:32.795333   18179 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:33:32.795341   18179 start.go:547] Will try again in 5 seconds ...
	I1117 12:33:33.890046   18179 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117123155-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.216817287s)
	I1117 12:33:33.890067   18179 kic.go:188] duration metric: took 6.216990 seconds to extract preloaded images to volume
	I1117 12:33:37.805243   18179 start.go:313] acquiring machines lock for old-k8s-version-20211117123155-2067: {Name:mkdcdd296c413d69b1c0c600c9bbeca63dadcf75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:37.805428   18179 start.go:317] acquired machines lock for "old-k8s-version-20211117123155-2067" in 147.493µs
	I1117 12:33:37.805482   18179 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:33:37.805492   18179 fix.go:55] fixHost starting: 
	I1117 12:33:37.805980   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:37.909698   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:37.909749   18179 fix.go:108] recreateIfNeeded on old-k8s-version-20211117123155-2067: state= err=unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:37.909763   18179 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:33:37.936914   18179 out.go:176] * docker "old-k8s-version-20211117123155-2067" container is missing, will recreate.
	I1117 12:33:37.936980   18179 delete.go:124] DEMOLISHING old-k8s-version-20211117123155-2067 ...
	I1117 12:33:37.937217   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:38.040899   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:33:38.040938   18179 stop.go:75] unable to get state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:38.040954   18179 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:38.041374   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:38.145054   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:38.145097   18179 delete.go:82] Unable to get host status for old-k8s-version-20211117123155-2067, assuming it has already been deleted: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:38.145189   18179 cli_runner.go:115] Run: docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067
	W1117 12:33:38.247276   18179 cli_runner.go:162] docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:33:38.247304   18179 kic.go:360] could not find the container old-k8s-version-20211117123155-2067 to remove it. will try anyways
	I1117 12:33:38.247390   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:38.350197   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:33:38.350235   18179 oci.go:83] error getting container status, will try to delete anyways: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:38.350322   18179 cli_runner.go:115] Run: docker exec --privileged -t old-k8s-version-20211117123155-2067 /bin/bash -c "sudo init 0"
	W1117 12:33:38.454543   18179 cli_runner.go:162] docker exec --privileged -t old-k8s-version-20211117123155-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:33:38.454575   18179 oci.go:656] error shutdown old-k8s-version-20211117123155-2067: docker exec --privileged -t old-k8s-version-20211117123155-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:39.463747   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:39.571275   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:39.571312   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:39.571324   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:39.571347   18179 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:39.966377   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:40.070829   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:40.070868   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:40.070881   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:40.070902   18179 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:40.666398   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:40.772357   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:40.772401   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:40.772417   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:40.772438   18179 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:42.106261   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:42.207989   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:42.208035   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:42.208048   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:42.208074   18179 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:43.421331   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:43.527056   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:43.527094   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:43.527116   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:43.527136   18179 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:45.309896   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:45.414090   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:45.414130   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:45.414139   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:45.414161   18179 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:48.684082   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:48.786611   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:48.786655   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:48.786666   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:48.786686   18179 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:54.885011   18179 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:33:54.984034   18179 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:54.984073   18179 oci.go:668] temporary error verifying shutdown: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:33:54.984083   18179 oci.go:670] temporary error: container old-k8s-version-20211117123155-2067 status is  but expect it to be exited
	I1117 12:33:54.984107   18179 oci.go:87] couldn't shut down old-k8s-version-20211117123155-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	 
	I1117 12:33:54.984198   18179 cli_runner.go:115] Run: docker rm -f -v old-k8s-version-20211117123155-2067
	I1117 12:33:55.087358   18179 cli_runner.go:115] Run: docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067
	W1117 12:33:55.189287   18179 cli_runner.go:162] docker container inspect -f {{.Id}} old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:33:55.189446   18179 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:33:55.293124   18179 cli_runner.go:115] Run: docker network rm old-k8s-version-20211117123155-2067
	I1117 12:34:02.917808   18179 cli_runner.go:168] Completed: docker network rm old-k8s-version-20211117123155-2067: (7.624690358s)
	W1117 12:34:02.918079   18179 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:34:02.918086   18179 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:34:03.922119   18179 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:34:03.949125   18179 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:34:03.949354   18179 start.go:160] libmachine.API.Create for "old-k8s-version-20211117123155-2067" (driver="docker")
	I1117 12:34:03.949408   18179 client.go:168] LocalClient.Create starting
	I1117 12:34:03.949603   18179 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:34:03.949741   18179 main.go:130] libmachine: Decoding PEM data...
	I1117 12:34:03.949775   18179 main.go:130] libmachine: Parsing certificate...
	I1117 12:34:03.949918   18179 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:34:03.970876   18179 main.go:130] libmachine: Decoding PEM data...
	I1117 12:34:03.970895   18179 main.go:130] libmachine: Parsing certificate...
	I1117 12:34:03.971670   18179 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:34:04.082562   18179 cli_runner.go:162] docker network inspect old-k8s-version-20211117123155-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:34:04.082662   18179 network_create.go:254] running [docker network inspect old-k8s-version-20211117123155-2067] to gather additional debugging logs...
	I1117 12:34:04.082701   18179 cli_runner.go:115] Run: docker network inspect old-k8s-version-20211117123155-2067
	W1117 12:34:04.185866   18179 cli_runner.go:162] docker network inspect old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:34:04.185890   18179 network_create.go:257] error running [docker network inspect old-k8s-version-20211117123155-2067]: docker network inspect old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20211117123155-2067
	I1117 12:34:04.185902   18179 network_create.go:259] output of [docker network inspect old-k8s-version-20211117123155-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20211117123155-2067
	
	** /stderr **
	I1117 12:34:04.185989   18179 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:34:04.306509   18179 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000136988] amended:false}} dirty:map[] misses:0}
	I1117 12:34:04.306545   18179 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:34:04.306739   18179 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000136988] amended:true}} dirty:map[192.168.49.0:0xc000136988 192.168.58.0:0xc00072a358] misses:0}
	I1117 12:34:04.306753   18179 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:34:04.306760   18179 network_create.go:106] attempt to create docker network old-k8s-version-20211117123155-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:34:04.306866   18179 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117123155-2067
	W1117 12:34:04.410315   18179 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117123155-2067 returned with exit code 1
	W1117 12:34:04.410352   18179 network_create.go:98] failed to create docker network old-k8s-version-20211117123155-2067 192.168.58.0/24, will retry: subnet is taken
	I1117 12:34:04.410597   18179 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000136988] amended:true}} dirty:map[192.168.49.0:0xc000136988 192.168.58.0:0xc00072a358] misses:1}
	I1117 12:34:04.410615   18179 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:34:04.410810   18179 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000136988] amended:true}} dirty:map[192.168.49.0:0xc000136988 192.168.58.0:0xc00072a358 192.168.67.0:0xc0006244b0] misses:1}
	I1117 12:34:04.410820   18179 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:34:04.410839   18179 network_create.go:106] attempt to create docker network old-k8s-version-20211117123155-2067 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 12:34:04.410936   18179 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117123155-2067
	I1117 12:34:09.484700   18179 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211117123155-2067: (5.073742407s)
	I1117 12:34:09.484724   18179 network_create.go:90] docker network old-k8s-version-20211117123155-2067 192.168.67.0/24 created
	I1117 12:34:09.484735   18179 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-20211117123155-2067" container
	I1117 12:34:09.484840   18179 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:34:09.584965   18179 cli_runner.go:115] Run: docker volume create old-k8s-version-20211117123155-2067 --label name.minikube.sigs.k8s.io=old-k8s-version-20211117123155-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:34:09.686385   18179 oci.go:102] Successfully created a docker volume old-k8s-version-20211117123155-2067
	I1117 12:34:09.686515   18179 cli_runner.go:115] Run: docker run --rm --name old-k8s-version-20211117123155-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20211117123155-2067 --entrypoint /usr/bin/test -v old-k8s-version-20211117123155-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:34:10.070257   18179 oci.go:106] Successfully prepared a docker volume old-k8s-version-20211117123155-2067
	E1117 12:34:10.070311   18179 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:34:10.070322   18179 client.go:171] LocalClient.Create took 6.120963084s
	I1117 12:34:10.070322   18179 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 12:34:10.070346   18179 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:34:10.070452   18179 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211117123155-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:34:12.072830   18179 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:34:12.072979   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:34:12.199827   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:34:12.199966   18179 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:34:12.399023   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:34:12.523870   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:34:12.523956   18179 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:34:12.823202   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:34:12.942738   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:34:12.942899   18179 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:34:13.651677   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:34:13.769266   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	W1117 12:34:13.769367   18179 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	W1117 12:34:13.769385   18179 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:34:13.769415   18179 start.go:129] duration metric: createHost completed in 9.847360776s
	I1117 12:34:13.769483   18179 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:34:13.769539   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:34:13.899299   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:34:13.899385   18179 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:34:14.248626   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:34:14.363531   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:34:14.363626   18179 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:34:14.813268   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:34:14.931391   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	I1117 12:34:14.931469   18179 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:34:15.507838   18179 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067
	W1117 12:34:15.611211   18179 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067 returned with exit code 1
	W1117 12:34:15.611306   18179 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	W1117 12:34:15.611335   18179 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-20211117123155-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211117123155-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	I1117 12:34:15.611344   18179 fix.go:57] fixHost completed within 37.806197926s
	I1117 12:34:15.611352   18179 start.go:80] releasing machines lock for "old-k8s-version-20211117123155-2067", held for 37.806249869s
	W1117 12:34:15.611499   18179 out.go:241] * Failed to start docker container. Running "minikube delete -p old-k8s-version-20211117123155-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-20211117123155-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:34:15.684861   18179 out.go:176] 
	W1117 12:34:15.685075   18179 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:34:15.685136   18179 out.go:241] * 
	* 
	W1117 12:34:15.686472   18179 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:34:15.764859   18179 out.go:176] 

** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20211117123155-2067 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0": exit status 80
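
The root failure above is GUEST_PROVISION ("Unable to locate kernel modules" from oci.go), but the subnet churn that precedes it is easy to miss: minikube skipped 192.168.49.0/24 (still reserved from the first attempt), found 192.168.58.0/24 taken, and created the network on 192.168.67.0/24. Below is a minimal, illustrative Go sketch of probing /24 candidates in that same +9 pattern until `docker network create` succeeds; the step size is inferred only from the subnets in this log, the network name is a placeholder, and none of this is minikube's implementation.

// subnet_probe.go - illustrative only; not minikube source.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const name = "example-net" // hypothetical network name, not one from this report
	for octet := 49; octet <= 67; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err != nil {
			// Typically "Pool overlaps with other one on this address space"
			// when the subnet is already in use; move on to the next candidate.
			fmt.Printf("subnet %s unavailable: %v (%s)\n", subnet, err, out)
			continue
		}
		fmt.Printf("created network %s on %s\n", name, subnet)
		return
	}
	fmt.Println("no free subnet found in the probed range")
}
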
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:

-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d96f08765f8e70f2f3c4766c618ab3e39db3c2f6a4b2c4a84cd64ba4a9599d77",
	        "Created": "2021-11-17T20:34:04.561748173Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
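
Note that the inspect output above describes a docker *network* named old-k8s-version-20211117123155-2067 (bridge driver, empty "Containers"), not a container, which is why this post-mortem inspect succeeds while every container inspect in the log fails. A small illustrative Go sketch (not part of the test suite) that prints only a network's attached containers:

// network_members.go - illustrative sketch only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "old-k8s-version-20211117123155-2067"
	// --format/-f on `docker network inspect` renders a Go template; printing
	// only .Containers makes it obvious the network exists but nothing joined it.
	out, err := exec.Command("docker", "network", "inspect",
		"-f", "{{json .Containers}}", name).CombinedOutput()
	fmt.Printf("containers attached to %s: %s (err=%v)\n", name, out, err)
}
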
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (176.768396ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:34:16.101823   18858 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (77.30s)
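
Every retry in the failed run hits the same docker container inspect template for the host port mapped to 22/tcp, and every attempt fails with "No such container" because the kic node was never created. As a hedged sketch, not minikube source, the same lookup with a bounded back-off loop looks roughly like this (retry budget and delays are placeholders):

// port_probe.go - illustrative sketch, not minikube source.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort runs the same inspect template the log retries and backs off
// between attempts.
func sshHostPort(container string) (string, error) {
	const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	var lastErr error
	for attempt, delay := 0, 150*time.Millisecond; attempt < 4; attempt, delay = attempt+1, delay*2 {
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		lastErr = err // "No such container" in the run above, on every attempt
		time.Sleep(delay)
	}
	return "", fmt.Errorf("get port 22 for %q: %w", container, lastErr)
}

func main() {
	port, err := sshHostPort("old-k8s-version-20211117123155-2067")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("ssh host port:", port)
}
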

TestStartStop/group/no-preload/serial/DeployApp (0.54s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20211117123224-2067 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context no-preload-20211117123224-2067 create -f testdata/busybox.yaml: exit status 1 (39.716331ms)

** stderr ** 
	error: context "no-preload-20211117123224-2067" does not exist

** /stderr **
start_stop_delete_test.go:181: kubectl --context no-preload-20211117123224-2067 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:

-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "7e70d2b6de16a9bfbf623591d0a58e26ebaef43f13701a504004c90e58b0cb73",
	        "Created": "2021-11-17T20:33:02.835448625Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (139.573859ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:33:14.172186   18335 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:

-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "7e70d2b6de16a9bfbf623591d0a58e26ebaef43f13701a504004c90e58b0cb73",
	        "Created": "2021-11-17T20:33:02.835448625Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (142.667921ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:33:14.425369   18344 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.54s)
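
This DeployApp failure is a follow-on error: the cluster never started, so the kubeconfig context was never written and `kubectl create` fails immediately with "context ... does not exist". A hedged, illustrative Go sketch (not part of the test suite) of guarding the create with a context-existence check; the context name is copied from the log:

// context_check.go - illustrative sketch only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasContext lists kubeconfig context names and looks for an exact match.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ctx := "no-preload-20211117123224-2067"
	ok, err := hasContext(ctx)
	if err != nil || !ok {
		fmt.Printf("context %q not available (err=%v); skipping kubectl create\n", ctx, err)
		return
	}
	out, err := exec.Command("kubectl", "--context", ctx,
		"create", "-f", "testdata/busybox.yaml").CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}
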

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.48s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20211117123224-2067 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20211117123224-2067 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context no-preload-20211117123224-2067 describe deploy/metrics-server -n kube-system: exit status 1 (38.39259ms)

** stderr ** 
	error: context "no-preload-20211117123224-2067" does not exist

** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context no-preload-20211117123224-2067 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:

-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "7e70d2b6de16a9bfbf623591d0a58e26ebaef43f13701a504004c90e58b0cb73",
	        "Created": "2021-11-17T20:33:02.835448625Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (141.273425ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:33:14.906499   18359 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.48s)
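The describe step above cannot tell whether the addon image was wired correctly because the kubeconfig context disappeared together with the cluster, so kubectl fails before reaching the deployment. A small, hypothetical Go helper that checks for the context first is sketched below; it shells out to kubectl config get-contexts -o name, and the helper itself is illustrative rather than test-suite code.

    // contextcheck.go: illustrative only, not part of the minikube test suite.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // contextExists lists the kubeconfig contexts and reports whether name is among them.
    func contextExists(name string) (bool, error) {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        for _, ctx := range strings.Fields(string(out)) {
            if ctx == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ctx := "no-preload-20211117123224-2067"
        if ok, err := contextExists(ctx); err != nil || !ok {
            fmt.Printf("context %q not found (err=%v); skipping describe\n", ctx, err)
            return
        }
        // Only now is the describe call from the log worth running.
        out, _ := exec.Command("kubectl", "--context", ctx, "describe",
            "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
        fmt.Println(string(out))
    }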

TestStartStop/group/no-preload/serial/Stop (15.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20211117123224-2067 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p no-preload-20211117123224-2067 --alsologtostderr -v=3: exit status 82 (14.723302381s)

-- stdout --
	* Stopping node "no-preload-20211117123224-2067"  ...
	* Stopping node "no-preload-20211117123224-2067"  ...
	* Stopping node "no-preload-20211117123224-2067"  ...
	* Stopping node "no-preload-20211117123224-2067"  ...
	* Stopping node "no-preload-20211117123224-2067"  ...
	* Stopping node "no-preload-20211117123224-2067"  ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:33:14.946296   18364 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:33:14.946488   18364 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:33:14.946493   18364 out.go:310] Setting ErrFile to fd 2...
	I1117 12:33:14.946496   18364 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:33:14.946570   18364 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:33:14.946857   18364 out.go:304] Setting JSON to false
	I1117 12:33:14.947047   18364 mustload.go:65] Loading cluster: no-preload-20211117123224-2067
	I1117 12:33:14.947287   18364 config.go:176] Loaded profile config "no-preload-20211117123224-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 12:33:14.947333   18364 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/no-preload-20211117123224-2067/config.json ...
	I1117 12:33:14.947681   18364 mustload.go:65] Loading cluster: no-preload-20211117123224-2067
	I1117 12:33:14.947765   18364 config.go:176] Loaded profile config "no-preload-20211117123224-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 12:33:14.947797   18364 stop.go:39] StopHost: no-preload-20211117123224-2067
	I1117 12:33:14.979813   18364 out.go:176] * Stopping node "no-preload-20211117123224-2067"  ...
	I1117 12:33:14.980058   18364 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:15.081527   18364 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:33:15.081572   18364 stop.go:75] unable to get state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	W1117 12:33:15.081592   18364 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:15.081614   18364 retry.go:31] will retry after 1.104660288s: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:16.193687   18364 stop.go:39] StopHost: no-preload-20211117123224-2067
	I1117 12:33:16.221343   18364 out.go:176] * Stopping node "no-preload-20211117123224-2067"  ...
	I1117 12:33:16.221625   18364 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:16.325337   18364 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:33:16.325392   18364 stop.go:75] unable to get state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	W1117 12:33:16.325408   18364 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:16.325430   18364 retry.go:31] will retry after 2.160763633s: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:18.486307   18364 stop.go:39] StopHost: no-preload-20211117123224-2067
	I1117 12:33:18.513659   18364 out.go:176] * Stopping node "no-preload-20211117123224-2067"  ...
	I1117 12:33:18.513788   18364 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:18.615015   18364 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:33:18.615066   18364 stop.go:75] unable to get state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	W1117 12:33:18.615082   18364 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:18.615108   18364 retry.go:31] will retry after 2.62026012s: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:21.235733   18364 stop.go:39] StopHost: no-preload-20211117123224-2067
	I1117 12:33:21.263456   18364 out.go:176] * Stopping node "no-preload-20211117123224-2067"  ...
	I1117 12:33:21.263735   18364 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:21.368956   18364 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:33:21.368992   18364 stop.go:75] unable to get state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	W1117 12:33:21.369003   18364 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:21.369022   18364 retry.go:31] will retry after 3.164785382s: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:24.541570   18364 stop.go:39] StopHost: no-preload-20211117123224-2067
	I1117 12:33:24.568753   18364 out.go:176] * Stopping node "no-preload-20211117123224-2067"  ...
	I1117 12:33:24.568897   18364 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:24.667763   18364 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:33:24.667810   18364 stop.go:75] unable to get state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	W1117 12:33:24.667820   18364 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:24.667843   18364 retry.go:31] will retry after 4.680977329s: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:29.348874   18364 stop.go:39] StopHost: no-preload-20211117123224-2067
	I1117 12:33:29.374951   18364 out.go:176] * Stopping node "no-preload-20211117123224-2067"  ...
	I1117 12:33:29.375087   18364 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:29.498445   18364 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:33:29.498488   18364 stop.go:75] unable to get state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	W1117 12:33:29.498505   18364 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:29.531780   18364 out.go:176] 
	W1117 12:33:29.531904   18364 out.go:241] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-20211117123224-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-20211117123224-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	W1117 12:33:29.531919   18364 out.go:241] * 
	* 
	W1117 12:33:29.534996   18364 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:33:29.610028   18364 out.go:176] 

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p no-preload-20211117123224-2067 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:

-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "7e70d2b6de16a9bfbf623591d0a58e26ebaef43f13701a504004c90e58b0cb73",
	        "Created": "2021-11-17T20:33:02.835448625Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (200.460009ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:33:29.985147   18451 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (15.08s)
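The stop log above is a bounded retry loop: the same inspect probe keeps failing, retry.go:31 waits a growing interval (1.10s, 2.16s, 2.62s, 3.16s, 4.68s), and after roughly 15 seconds the command gives up with GUEST_STOP_TIMEOUT and exit status 82. A hypothetical Go sketch of that shape follows; the 15-second budget and the growth factor are read off this log, not taken from minikube's source.

    // stopretry.go: illustrative only, not part of the minikube test suite.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // inspectState runs the same probe the stop loop keeps retrying in the log above.
    func inspectState(name string) error {
        return exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Run()
    }

    func main() {
        name := "no-preload-20211117123224-2067"
        deadline := time.Now().Add(15 * time.Second) // rough overall stop budget seen in the log
        backoff := time.Second

        for {
            if err := inspectState(name); err == nil {
                fmt.Println("container state readable, a real stop would proceed from here")
                return
            } else if time.Now().After(deadline) {
                // Analogue of "X Exiting due to GUEST_STOP_TIMEOUT ... exit status 82".
                fmt.Fprintln(os.Stderr, "giving up, GUEST_STOP_TIMEOUT:", err)
                os.Exit(82)
            } else {
                fmt.Printf("will retry after %v: %v\n", backoff, err)
                time.Sleep(backoff)
                backoff = backoff * 3 / 2 // grow roughly like the intervals in the log
            }
        }
    }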

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.78s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (201.395453ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:33:30.187003   18457 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20211117123224-2067 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:

-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "7e70d2b6de16a9bfbf623591d0a58e26ebaef43f13701a504004c90e58b0cb73",
	        "Created": "2021-11-17T20:33:02.835448625Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (176.033173ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:33:30.761774   18479 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.78s)
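The SecondStart log below shows the flip side: although the container is gone, the image cache is intact, so cache.go:115 reports each tarball as already present and cache.go:80 skips the download. A minimal, hypothetical sketch of that existence check follows; the path layout (image tags mapped to file names with ":" replaced by "_") is inferred from the log, and the code is not minikube's implementation.

    // cachecheck.go: illustrative only, not part of the minikube test suite.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachedTarball maps an image ref such as "k8s.gcr.io/pause:3.5" to the tarball path
    // the log reports under the cache directory, e.g. ".../cache/images/k8s.gcr.io/pause_3.5".
    func cachedTarball(cacheDir, image string) string {
        return filepath.Join(cacheDir, "images", strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
        // Assumption for illustration: the cache lives under $MINIKUBE_HOME/cache.
        cacheDir := filepath.Join(os.Getenv("MINIKUBE_HOME"), "cache")
        for _, img := range []string{"k8s.gcr.io/pause:3.5", "k8s.gcr.io/etcd:3.5.0-0"} {
            p := cachedTarball(cacheDir, img)
            if _, err := os.Stat(p); err == nil {
                fmt.Printf("cache image %q -> %q exists, skipping download\n", img, p)
            } else {
                fmt.Printf("cache image %q missing, would download and save to %q\n", img, p)
            }
        }
    }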

TestStartStop/group/no-preload/serial/SecondStart (76.79s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20211117123224-2067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-20211117123224-2067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 80 (1m16.522284321s)

-- stdout --
	* [no-preload-20211117123224-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-20211117123224-2067 in cluster no-preload-20211117123224-2067
	* Pulling base image ...
	* docker "no-preload-20211117123224-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-20211117123224-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:33:30.814986   18484 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:33:30.815269   18484 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:33:30.815275   18484 out.go:310] Setting ErrFile to fd 2...
	I1117 12:33:30.815278   18484 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:33:30.815379   18484 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:33:30.815678   18484 out.go:304] Setting JSON to false
	I1117 12:33:30.843773   18484 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3785,"bootTime":1637177425,"procs":323,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:33:30.843904   18484 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:33:30.870502   18484 out.go:176] * [no-preload-20211117123224-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:33:30.870633   18484 notify.go:174] Checking for updates...
	I1117 12:33:30.917229   18484 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:33:30.943275   18484 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:33:30.969190   18484 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:33:30.995264   18484 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:33:30.995680   18484 config.go:176] Loaded profile config "no-preload-20211117123224-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 12:33:30.996296   18484 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:33:31.112515   18484 docker.go:132] docker version: linux-20.10.5
	I1117 12:33:31.112669   18484 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:33:31.316972   18484 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:58 SystemTime:2021-11-17 20:33:31.244728889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:33:31.364846   18484 out.go:176] * Using the docker driver based on existing profile
	I1117 12:33:31.364886   18484 start.go:280] selected driver: docker
	I1117 12:33:31.364897   18484 start.go:775] validating driver "docker" against &{Name:no-preload-20211117123224-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:no-preload-20211117123224-2067 Namespace:default APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:/Users:/minikube-host}
	I1117 12:33:31.365009   18484 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:33:31.368798   18484 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:33:31.551371   18484 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:58 SystemTime:2021-11-17 20:33:31.495808914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:33:31.551590   18484 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:33:31.551616   18484 cni.go:93] Creating CNI manager for ""
	I1117 12:33:31.551626   18484 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:33:31.551651   18484 start_flags.go:282] config:
	{Name:no-preload-20211117123224-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:no-preload-20211117123224-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:33:31.598835   18484 out.go:176] * Starting control plane node no-preload-20211117123224-2067 in cluster no-preload-20211117123224-2067
	I1117 12:33:31.598894   18484 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:33:31.624763   18484 out.go:176] * Pulling base image ...
	I1117 12:33:31.624802   18484 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:33:31.624864   18484 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:33:31.624990   18484 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/no-preload-20211117123224-2067/config.json ...
	I1117 12:33:31.625038   18484 cache.go:107] acquiring lock: {Name:mk46c2aac0c807364b7b6718b28e798e38331a44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:31.625109   18484 cache.go:107] acquiring lock: {Name:mkc38557d3f08ef749cdb79439f2e56bd72f6169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:31.625038   18484 cache.go:107] acquiring lock: {Name:mk484f4aa10be29d59ecef162cc3ba4ef356bc71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:31.625117   18484 cache.go:107] acquiring lock: {Name:mk1cf5798a7a6d25ea3a3811b697e938466510b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:31.626362   18484 cache.go:107] acquiring lock: {Name:mkdf67b1af8680e831a8cb6a6b59deeb701a2c60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:31.626474   18484 cache.go:107] acquiring lock: {Name:mkfdfbbae55ac5b96e9234058d2251140315481d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:31.626368   18484 cache.go:107] acquiring lock: {Name:mke1ba390537bac8e8cb13b8ad3c21b706e43051 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:31.627307   18484 cache.go:107] acquiring lock: {Name:mk8510e8d29ffb1d7afc63ac2448ba0a514946b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:31.627335   18484 cache.go:107] acquiring lock: {Name:mk51daa56a24576eb68d57c222971a7123f25c24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:31.627461   18484 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I1117 12:33:31.627489   18484 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.5 exists
	I1117 12:33:31.627478   18484 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 exists
	I1117 12:33:31.627489   18484 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 2.453553ms
	I1117 12:33:31.627513   18484 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I1117 12:33:31.627505   18484 cache.go:96] cache image "k8s.gcr.io/pause:3.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.5" took 1.025534ms
	I1117 12:33:31.627544   18484 cache.go:80] save to tar file k8s.gcr.io/pause:3.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.5 succeeded
	I1117 12:33:31.627492   18484 cache.go:107] acquiring lock: {Name:mk45dbae0c82aa7e4329337c39882df418aeab32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:31.627514   18484 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.4" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4" took 2.397325ms
	I1117 12:33:31.627589   18484 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.4 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 succeeded
	I1117 12:33:31.627545   18484 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1117 12:33:31.627612   18484 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I1117 12:33:31.627615   18484 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.508683ms
	I1117 12:33:31.627624   18484 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1117 12:33:31.627624   18484 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 1.249085ms
	I1117 12:33:31.627633   18484 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I1117 12:33:31.627635   18484 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.22.4-rc.0
	I1117 12:33:31.627636   18484 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0
	I1117 12:33:31.627663   18484 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 exists
	I1117 12:33:31.627636   18484 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.22.4-rc.0
	I1117 12:33:31.627635   18484 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.22.4-rc.0
	I1117 12:33:31.627689   18484 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.0-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0" took 293.092µs
	I1117 12:33:31.627715   18484 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.0-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 succeeded
	I1117 12:33:31.647996   18484 image.go:176] found k8s.gcr.io/kube-proxy:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-proxy} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-proxy:v1.22.4-rc.0} opener:0xc000bea070 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:33:31.648031   18484 image.go:176] found k8s.gcr.io/kube-scheduler:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-scheduler} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-scheduler:v1.22.4-rc.0} opener:0xc0000e6230 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:33:31.648043   18484 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.4-rc.0
	I1117 12:33:31.648050   18484 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.4-rc.0
	I1117 12:33:31.648665   18484 image.go:176] found k8s.gcr.io/kube-apiserver:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-apiserver} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-apiserver:v1.22.4-rc.0} opener:0xc000c00000 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:33:31.648693   18484 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.4-rc.0
	I1117 12:33:31.648816   18484 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-controller-manager} tag:v1.22.4-rc.0 original:k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0} opener:0xc000bea0e0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 12:33:31.648832   18484 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.4-rc.0
	I1117 12:33:31.651578   18484 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.22.4-rc.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.4-rc.0" took 26.509849ms
	I1117 12:33:31.651873   18484 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.22.4-rc.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.4-rc.0" took 26.093689ms
	I1117 12:33:31.651900   18484 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.22.4-rc.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.4-rc.0" took 26.884853ms
	I1117 12:33:31.652125   18484 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.4-rc.0" took 27.048485ms
	I1117 12:33:31.775185   18484 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:33:31.775203   18484 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:33:31.775215   18484 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:33:31.775250   18484 start.go:313] acquiring machines lock for no-preload-20211117123224-2067: {Name:mk30ecdb69a16cf786227c9355857466145cadb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:33:31.775332   18484 start.go:317] acquired machines lock for "no-preload-20211117123224-2067" in 69.716µs
	I1117 12:33:31.775355   18484 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:33:31.775364   18484 fix.go:55] fixHost starting: 
	I1117 12:33:31.775597   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:31.905185   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:31.905301   18484 fix.go:108] recreateIfNeeded on no-preload-20211117123224-2067: state= err=unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:31.905343   18484 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:33:31.953777   18484 out.go:176] * docker "no-preload-20211117123224-2067" container is missing, will recreate.
	I1117 12:33:31.953816   18484 delete.go:124] DEMOLISHING no-preload-20211117123224-2067 ...
	I1117 12:33:31.954021   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:32.074591   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:33:32.074649   18484 stop.go:75] unable to get state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:32.074670   18484 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:32.075162   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:32.199555   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:32.199603   18484 delete.go:82] Unable to get host status for no-preload-20211117123224-2067, assuming it has already been deleted: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:32.199716   18484 cli_runner.go:115] Run: docker container inspect -f {{.Id}} no-preload-20211117123224-2067
	W1117 12:33:32.315448   18484 cli_runner.go:162] docker container inspect -f {{.Id}} no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:33:32.315484   18484 kic.go:360] could not find the container no-preload-20211117123224-2067 to remove it. will try anyways
	I1117 12:33:32.315623   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:32.434680   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:33:32.434723   18484 oci.go:83] error getting container status, will try to delete anyways: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:32.434819   18484 cli_runner.go:115] Run: docker exec --privileged -t no-preload-20211117123224-2067 /bin/bash -c "sudo init 0"
	W1117 12:33:32.553804   18484 cli_runner.go:162] docker exec --privileged -t no-preload-20211117123224-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:33:32.553833   18484 oci.go:656] error shutdown no-preload-20211117123224-2067: docker exec --privileged -t no-preload-20211117123224-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:33.555509   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:33.665646   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:33.665690   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:33.665706   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:33:33.665745   18484 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:34.218354   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:34.331178   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:34.331219   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:34.331228   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:33:34.331252   18484 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:35.415708   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:35.518396   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:35.518439   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:35.518449   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:33:35.518470   18484 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:36.834098   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:36.934537   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:36.934591   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:36.934603   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:33:36.934624   18484 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:38.518876   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:38.631533   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:38.631575   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:38.631584   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:33:38.631608   18484 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:40.982487   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:41.086837   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:41.086885   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:41.086912   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:33:41.086937   18484 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:45.603233   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:45.704712   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:45.704752   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:45.704762   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:33:45.704783   18484 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:48.934092   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:33:49.039607   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:33:49.039644   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:33:49.039653   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:33:49.039679   18484 oci.go:87] couldn't shut down no-preload-20211117123224-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	 
	I1117 12:33:49.039762   18484 cli_runner.go:115] Run: docker rm -f -v no-preload-20211117123224-2067
	I1117 12:33:49.141011   18484 cli_runner.go:115] Run: docker container inspect -f {{.Id}} no-preload-20211117123224-2067
	W1117 12:33:49.241081   18484 cli_runner.go:162] docker container inspect -f {{.Id}} no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:33:49.241200   18484 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:33:49.352162   18484 cli_runner.go:115] Run: docker network rm no-preload-20211117123224-2067
	I1117 12:33:52.830426   18484 cli_runner.go:168] Completed: docker network rm no-preload-20211117123224-2067: (3.478253677s)
	W1117 12:33:52.831143   18484 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:33:52.831151   18484 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:33:53.835153   18484 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:33:53.861669   18484 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:33:53.861853   18484 start.go:160] libmachine.API.Create for "no-preload-20211117123224-2067" (driver="docker")
	I1117 12:33:53.861906   18484 client.go:168] LocalClient.Create starting
	I1117 12:33:53.862109   18484 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:33:53.862205   18484 main.go:130] libmachine: Decoding PEM data...
	I1117 12:33:53.862253   18484 main.go:130] libmachine: Parsing certificate...
	I1117 12:33:53.862365   18484 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:33:53.883071   18484 main.go:130] libmachine: Decoding PEM data...
	I1117 12:33:53.883113   18484 main.go:130] libmachine: Parsing certificate...
	I1117 12:33:53.884486   18484 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:33:53.988485   18484 cli_runner.go:162] docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:33:53.988607   18484 network_create.go:254] running [docker network inspect no-preload-20211117123224-2067] to gather additional debugging logs...
	I1117 12:33:53.988629   18484 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067
	W1117 12:33:54.088704   18484 cli_runner.go:162] docker network inspect no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:33:54.088729   18484 network_create.go:257] error running [docker network inspect no-preload-20211117123224-2067]: docker network inspect no-preload-20211117123224-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20211117123224-2067
	I1117 12:33:54.088743   18484 network_create.go:259] output of [docker network inspect no-preload-20211117123224-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20211117123224-2067
	
	** /stderr **
	I1117 12:33:54.088835   18484 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:33:54.189539   18484 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000186db8] misses:0}
	I1117 12:33:54.189578   18484 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:33:54.189597   18484 network_create.go:106] attempt to create docker network no-preload-20211117123224-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:33:54.189677   18484 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067
	W1117 12:33:54.306801   18484 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067 returned with exit code 1
	W1117 12:33:54.306845   18484 network_create.go:98] failed to create docker network no-preload-20211117123224-2067 192.168.49.0/24, will retry: subnet is taken
	I1117 12:33:54.307077   18484 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000186db8] amended:false}} dirty:map[] misses:0}
	I1117 12:33:54.307095   18484 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:33:54.307269   18484 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000186db8] amended:true}} dirty:map[192.168.49.0:0xc000186db8 192.168.58.0:0xc00053cbc0] misses:0}
	I1117 12:33:54.307286   18484 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:33:54.307293   18484 network_create.go:106] attempt to create docker network no-preload-20211117123224-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:33:54.307367   18484 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067
	I1117 12:33:59.651786   18484 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067: (5.344413264s)
	I1117 12:33:59.651826   18484 network_create.go:90] docker network no-preload-20211117123224-2067 192.168.58.0/24 created
	I1117 12:33:59.651867   18484 kic.go:106] calculated static IP "192.168.58.2" for the "no-preload-20211117123224-2067" container
	I1117 12:33:59.652002   18484 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:33:59.751761   18484 cli_runner.go:115] Run: docker volume create no-preload-20211117123224-2067 --label name.minikube.sigs.k8s.io=no-preload-20211117123224-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:33:59.851296   18484 oci.go:102] Successfully created a docker volume no-preload-20211117123224-2067
	I1117 12:33:59.851423   18484 cli_runner.go:115] Run: docker run --rm --name no-preload-20211117123224-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20211117123224-2067 --entrypoint /usr/bin/test -v no-preload-20211117123224-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:34:00.250600   18484 oci.go:106] Successfully prepared a docker volume no-preload-20211117123224-2067
	E1117 12:34:00.250648   18484 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:34:00.250658   18484 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:34:00.250675   18484 client.go:171] LocalClient.Create took 6.388819162s
	I1117 12:34:02.251466   18484 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:34:02.251542   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:02.352179   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:02.352265   18484 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:02.502630   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:02.602632   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:02.602730   18484 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:02.903234   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:03.005214   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:03.005310   18484 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:03.583019   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:03.686644   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	W1117 12:34:03.686733   18484 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	W1117 12:34:03.686776   18484 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:03.686790   18484 start.go:129] duration metric: createHost completed in 9.851684256s
	I1117 12:34:03.686843   18484 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:34:03.686903   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:03.787120   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:03.787217   18484 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:03.966384   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:04.082599   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:04.082682   18484 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:04.422743   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:04.599277   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:04.599366   18484 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:05.061032   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:05.162768   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	W1117 12:34:05.162859   18484 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	W1117 12:34:05.162883   18484 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:05.162896   18484 fix.go:57] fixHost completed within 33.387838062s
	I1117 12:34:05.162904   18484 start.go:80] releasing machines lock for "no-preload-20211117123224-2067", held for 33.387869112s
	W1117 12:34:05.162920   18484 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:34:05.163037   18484 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:34:05.163043   18484 start.go:547] Will try again in 5 seconds ...
	I1117 12:34:10.164699   18484 start.go:313] acquiring machines lock for no-preload-20211117123224-2067: {Name:mk30ecdb69a16cf786227c9355857466145cadb3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:34:10.164809   18484 start.go:317] acquired machines lock for "no-preload-20211117123224-2067" in 78.286µs
	I1117 12:34:10.164832   18484 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:34:10.164837   18484 fix.go:55] fixHost starting: 
	I1117 12:34:10.165095   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:10.278160   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:10.278214   18484 fix.go:108] recreateIfNeeded on no-preload-20211117123224-2067: state= err=unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:10.278223   18484 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:34:10.322804   18484 out.go:176] * docker "no-preload-20211117123224-2067" container is missing, will recreate.
	I1117 12:34:10.322823   18484 delete.go:124] DEMOLISHING no-preload-20211117123224-2067 ...
	I1117 12:34:10.322964   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:10.442044   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:34:10.442092   18484 stop.go:75] unable to get state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:10.442107   18484 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:10.442750   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:10.566283   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:10.566360   18484 delete.go:82] Unable to get host status for no-preload-20211117123224-2067, assuming it has already been deleted: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:10.566477   18484 cli_runner.go:115] Run: docker container inspect -f {{.Id}} no-preload-20211117123224-2067
	W1117 12:34:10.693325   18484 cli_runner.go:162] docker container inspect -f {{.Id}} no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:10.693354   18484 kic.go:360] could not find the container no-preload-20211117123224-2067 to remove it. will try anyways
	I1117 12:34:10.693465   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:10.839084   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:34:10.839128   18484 oci.go:83] error getting container status, will try to delete anyways: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:10.839242   18484 cli_runner.go:115] Run: docker exec --privileged -t no-preload-20211117123224-2067 /bin/bash -c "sudo init 0"
	W1117 12:34:10.963195   18484 cli_runner.go:162] docker exec --privileged -t no-preload-20211117123224-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:34:10.963222   18484 oci.go:656] error shutdown no-preload-20211117123224-2067: docker exec --privileged -t no-preload-20211117123224-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:11.963413   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:12.096768   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:12.096830   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:12.096841   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:34:12.096863   18484 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:12.496132   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:12.618986   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:12.619036   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:12.619046   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:34:12.619076   18484 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:13.214825   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:13.337261   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:13.337305   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:13.337318   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:34:13.337342   18484 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:14.664007   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:14.780061   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:14.780109   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:14.780127   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:34:14.780155   18484 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:16.000727   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:16.120664   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:16.120707   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:16.120716   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:34:16.120739   18484 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:17.910196   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:18.036769   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:18.036833   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:18.036840   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:34:18.036864   18484 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:21.314158   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:21.417304   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:21.417344   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:21.417355   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:34:21.417390   18484 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:27.522624   18484 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:27.633333   18484 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:27.633376   18484 oci.go:668] temporary error verifying shutdown: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:27.633383   18484 oci.go:670] temporary error: container no-preload-20211117123224-2067 status is  but expect it to be exited
	I1117 12:34:27.633409   18484 oci.go:87] couldn't shut down no-preload-20211117123224-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	 
	I1117 12:34:27.633487   18484 cli_runner.go:115] Run: docker rm -f -v no-preload-20211117123224-2067
	I1117 12:34:27.852245   18484 cli_runner.go:115] Run: docker container inspect -f {{.Id}} no-preload-20211117123224-2067
	W1117 12:34:28.006832   18484 cli_runner.go:162] docker container inspect -f {{.Id}} no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:28.006935   18484 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:34:28.117620   18484 cli_runner.go:162] docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:34:28.117733   18484 network_create.go:254] running [docker network inspect no-preload-20211117123224-2067] to gather additional debugging logs...
	I1117 12:34:28.117750   18484 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067
	W1117 12:34:28.229640   18484 cli_runner.go:162] docker network inspect no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:28.229692   18484 network_create.go:257] error running [docker network inspect no-preload-20211117123224-2067]: docker network inspect no-preload-20211117123224-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20211117123224-2067
	I1117 12:34:28.229708   18484 network_create.go:259] output of [docker network inspect no-preload-20211117123224-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20211117123224-2067
	
	** /stderr **
	W1117 12:34:28.230601   18484 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:34:28.230608   18484 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:34:29.233049   18484 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:34:29.260262   18484 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:34:29.260454   18484 start.go:160] libmachine.API.Create for "no-preload-20211117123224-2067" (driver="docker")
	I1117 12:34:29.260488   18484 client.go:168] LocalClient.Create starting
	I1117 12:34:29.260666   18484 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:34:29.260761   18484 main.go:130] libmachine: Decoding PEM data...
	I1117 12:34:29.260790   18484 main.go:130] libmachine: Parsing certificate...
	I1117 12:34:29.260970   18484 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:34:29.281616   18484 main.go:130] libmachine: Decoding PEM data...
	I1117 12:34:29.281687   18484 main.go:130] libmachine: Parsing certificate...
	I1117 12:34:29.304941   18484 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:34:29.408084   18484 cli_runner.go:162] docker network inspect no-preload-20211117123224-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:34:29.408196   18484 network_create.go:254] running [docker network inspect no-preload-20211117123224-2067] to gather additional debugging logs...
	I1117 12:34:29.408214   18484 cli_runner.go:115] Run: docker network inspect no-preload-20211117123224-2067
	W1117 12:34:29.511827   18484 cli_runner.go:162] docker network inspect no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:29.511851   18484 network_create.go:257] error running [docker network inspect no-preload-20211117123224-2067]: docker network inspect no-preload-20211117123224-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20211117123224-2067
	I1117 12:34:29.511862   18484 network_create.go:259] output of [docker network inspect no-preload-20211117123224-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20211117123224-2067
	
	** /stderr **
	I1117 12:34:29.511955   18484 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:34:29.617744   18484 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000186db8] amended:true}} dirty:map[192.168.49.0:0xc000186db8 192.168.58.0:0xc00053cbc0] misses:0}
	I1117 12:34:29.617778   18484 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:34:29.617945   18484 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000186db8] amended:true}} dirty:map[192.168.49.0:0xc000186db8 192.168.58.0:0xc00053cbc0] misses:1}
	I1117 12:34:29.617954   18484 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:34:29.618113   18484 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000186db8] amended:true}} dirty:map[192.168.49.0:0xc000186db8 192.168.58.0:0xc00053cbc0 192.168.67.0:0xc0006f0370] misses:1}
	I1117 12:34:29.618129   18484 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:34:29.618154   18484 network_create.go:106] attempt to create docker network no-preload-20211117123224-2067 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 12:34:29.618233   18484 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067
	I1117 12:34:40.981068   18484 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true no-preload-20211117123224-2067: (11.362888809s)
	I1117 12:34:40.981092   18484 network_create.go:90] docker network no-preload-20211117123224-2067 192.168.67.0/24 created
	I1117 12:34:40.981103   18484 kic.go:106] calculated static IP "192.168.67.2" for the "no-preload-20211117123224-2067" container
	I1117 12:34:40.982379   18484 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:34:41.086083   18484 cli_runner.go:115] Run: docker volume create no-preload-20211117123224-2067 --label name.minikube.sigs.k8s.io=no-preload-20211117123224-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:34:41.189073   18484 oci.go:102] Successfully created a docker volume no-preload-20211117123224-2067
	I1117 12:34:41.189206   18484 cli_runner.go:115] Run: docker run --rm --name no-preload-20211117123224-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20211117123224-2067 --entrypoint /usr/bin/test -v no-preload-20211117123224-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:34:41.598484   18484 oci.go:106] Successfully prepared a docker volume no-preload-20211117123224-2067
	E1117 12:34:41.598540   18484 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:34:41.598550   18484 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:34:41.598562   18484 client.go:171] LocalClient.Create took 12.3381783s
	I1117 12:34:43.608070   18484 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:34:43.608223   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:43.713028   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:43.713114   18484 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:43.917290   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:44.035005   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:44.035139   18484 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:44.334212   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:44.447100   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:44.447182   18484 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:45.158131   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:45.265418   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	W1117 12:34:45.265511   18484 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	W1117 12:34:45.265526   18484 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:45.265536   18484 start.go:129] duration metric: createHost completed in 16.032608694s
	I1117 12:34:45.265596   18484 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:34:45.265653   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:45.368368   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:45.368481   18484 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:45.716608   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:45.832364   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:45.832475   18484 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:46.283091   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:46.387866   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	I1117 12:34:46.387991   18484 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:46.966586   18484 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067
	W1117 12:34:47.102970   18484 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067 returned with exit code 1
	W1117 12:34:47.103061   18484 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	W1117 12:34:47.103077   18484 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-20211117123224-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211117123224-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	I1117 12:34:47.103087   18484 fix.go:57] fixHost completed within 36.938586586s
	I1117 12:34:47.103101   18484 start.go:80] releasing machines lock for "no-preload-20211117123224-2067", held for 36.938620328s
	W1117 12:34:47.103258   18484 out.go:241] * Failed to start docker container. Running "minikube delete -p no-preload-20211117123224-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p no-preload-20211117123224-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:34:47.183969   18484 out.go:176] 
	W1117 12:34:47.184181   18484 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:34:47.184206   18484 out.go:241] * 
	* 
	W1117 12:34:47.185593   18484 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:34:47.263852   18484 out.go:176] 
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p no-preload-20211117123224-2067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 80
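The start died while creating the kic node ("kernel modules: Unable to locate kernel modules"), and the output above already names the workaround: delete the profile and start it again. A minimal Go sketch of that recovery, reusing only the binary path, profile name and flags recorded in the failing invocation above (purely illustrative, not part of the test suite):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and returns any error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const profile = "no-preload-20211117123224-2067"
	// Tear down whatever half-created state is left behind.
	if err := run("out/minikube-darwin-amd64", "delete", "-p", profile); err != nil {
		fmt.Println("delete failed:", err)
	}
	// Recreate the cluster with the same flags the test used.
	if err := run("out/minikube-darwin-amd64", "start", "-p", profile,
		"--memory=2200", "--wait=true", "--preload=false",
		"--driver=docker", "--kubernetes-version=v1.22.4-rc.0"); err != nil {
		fmt.Println("start failed:", err)
	}
}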
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:
-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "e0e6f7e28a87dee61f6590decac6a746393a7acc2e112cf0306796c62f45fcf4",
	        "Created": "2021-11-17T20:34:29.726471754Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (147.67084ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 12:34:47.554300   19202 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (76.79s)
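Most of the log above is a single probe repeated with backoff: ask Docker which host port is published for 22/tcp on the profile's container, which can never succeed because the container was never created. A minimal Go sketch of that probe, with a hypothetical sshPort helper and the container name taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshPort asks Docker for the host port published for 22/tcp on the container.
func sshPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %v: %s", container, err, strings.TrimSpace(string(out)))
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const name = "no-preload-20211117123224-2067"
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= 4; attempt++ {
		port, err := sshPort(name)
		if err == nil {
			fmt.Println("ssh port:", port)
			return
		}
		// "No such container" means the container never came up, so every
		// retry fails until the retry budget is exhausted.
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait a little each time
	}
}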
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20211117123155-2067" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d96f08765f8e70f2f3c4766c618ab3e39db3c2f6a4b2c4a84cd64ba4a9599d77",
	        "Created": "2021-11-17T20:34:04.561748173Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (144.545363ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 12:34:16.357226   18871 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.26s)
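The failure above is not about the dashboard pod at all: the kubeconfig context for the profile is gone because the container was never recreated. A short Go sketch of that precondition check, shelling out to kubectl (context name from the log; `kubectl config get-contexts -o name` is assumed to be available in your kubectl version):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether the named context is present in the
// current kubeconfig.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("old-k8s-version-20211117123155-2067")
	// Once the profile's container has been deleted, this prints "false <nil>".
	fmt.Println(ok, err)
}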
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20211117123155-2067" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211117123155-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117123155-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (39.878714ms)
** stderr ** 
	error: context "old-k8s-version-20211117123155-2067" does not exist
** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20211117123155-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d96f08765f8e70f2f3c4766c618ab3e39db3c2f6a4b2c4a84cd64ba4a9599d77",
	        "Created": "2021-11-17T20:34:04.561748173Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (146.354141ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 12:34:16.651728   18881 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.29s)
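The addon assertion reduces to: describe the dashboard-metrics-scraper deployment in the profile's context and confirm it references the expected echoserver image. A rough Go sketch of the same check, with the context, namespace and image string taken from the log above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "old-k8s-version-20211117123155-2067"
	out, err := exec.Command("kubectl", "--context", ctx, "describe",
		"deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard").CombinedOutput()
	if err != nil {
		// With the context missing, kubectl exits non-zero, which is the
		// exit status 1 recorded above.
		fmt.Printf("describe failed: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), "k8s.gcr.io/echoserver:1.4") {
		fmt.Println("deployment does not reference the expected addon image")
	}
}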
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-20211117123155-2067 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p old-k8s-version-20211117123155-2067 "sudo crictl images -o json": exit status 80 (211.196681ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p old-k8s-version-20211117123155-2067 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:289: failed to decode images json: unexpected end of JSON input. output:
start_stop_delete_test.go:289: v1.14.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.3.1",
- 	"k8s.gcr.io/etcd:3.3.10",
- 	"k8s.gcr.io/kube-apiserver:v1.14.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.14.0",
- 	"k8s.gcr.io/kube-proxy:v1.14.0",
- 	"k8s.gcr.io/kube-scheduler:v1.14.0",
- 	"k8s.gcr.io/pause:3.1",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d96f08765f8e70f2f3c4766c618ab3e39db3c2f6a4b2c4a84cd64ba4a9599d77",
	        "Created": "2021-11-17T20:34:04.561748173Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (144.464436ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 12:34:17.113795   18895 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.46s)
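The image verification runs `sudo crictl images -o json` inside the node and diffs the repo tags against the wanted v1.14.0 list; with the host gone, the command returns an empty body, so the JSON decode fails with exactly the error shown. A hypothetical Go sketch of that comparison (JSON field names assumed to match what recent crictl releases emit; the want list is abbreviated from the diff above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the shape of `crictl images -o json` output.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	want := []string{
		"k8s.gcr.io/kube-apiserver:v1.14.0",
		"k8s.gcr.io/kube-proxy:v1.14.0",
		"k8s.gcr.io/pause:3.1",
	}
	out, err := exec.Command("out/minikube-darwin-amd64", "ssh", "-p",
		"old-k8s-version-20211117123155-2067", "sudo crictl images -o json").Output()
	if err != nil {
		fmt.Println("ssh failed (host is not running):", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		// An empty body yields "unexpected end of JSON input", as seen above.
		fmt.Println("decode:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, w := range want {
		if !have[w] {
			fmt.Println("missing:", w)
		}
	}
}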
TestStartStop/group/old-k8s-version/serial/Pause (0.71s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-20211117123155-2067 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p old-k8s-version-20211117123155-2067 --alsologtostderr -v=1: exit status 80 (203.026772ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1117 12:34:17.154283   18900 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:34:17.155033   18900 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:34:17.155038   18900 out.go:310] Setting ErrFile to fd 2...
	I1117 12:34:17.155041   18900 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:34:17.155116   18900 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:34:17.155275   18900 out.go:304] Setting JSON to false
	I1117 12:34:17.155291   18900 mustload.go:65] Loading cluster: old-k8s-version-20211117123155-2067
	I1117 12:34:17.155519   18900 config.go:176] Loaded profile config "old-k8s-version-20211117123155-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	I1117 12:34:17.155872   18900 cli_runner.go:115] Run: docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}
	W1117 12:34:17.263169   18900 cli_runner.go:162] docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:17.290488   18900 out.go:176] 
	W1117 12:34:17.290701   18900 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
	
	W1117 12:34:17.290725   18900 out.go:241] * 
	* 
	W1117 12:34:17.295222   18900 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:34:17.316169   18900 out.go:176] 
** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p old-k8s-version-20211117123155-2067 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d96f08765f8e70f2f3c4766c618ab3e39db3c2f6a4b2c4a84cd64ba4a9599d77",
	        "Created": "2021-11-17T20:34:04.561748173Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (144.372902ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 12:34:17.566837   18909 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117123155-2067
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20211117123155-2067:
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-20211117123155-2067",
	        "Id": "d96f08765f8e70f2f3c4766c618ab3e39db3c2f6a4b2c4a84cd64ba4a9599d77",
	        "Created": "2021-11-17T20:34:04.561748173Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117123155-2067 -n old-k8s-version-20211117123155-2067: exit status 7 (146.542261ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 12:34:17.820169   18918 status.go:247] status error: host: state: unknown state "old-k8s-version-20211117123155-2067": docker container inspect old-k8s-version-20211117123155-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-20211117123155-2067
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117123155-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.71s)
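Pause never gets past the status probe: `docker container inspect --format {{.State.Status}}` fails with "No such container", which minikube reports as the Nonexistent state seen in every post-mortem here. A minimal Go sketch of that mapping (hypothetical hostState helper; container name from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState maps Docker's view of the container onto the states the
// status output above uses.
func hostState(container string) string {
	out, err := exec.Command("docker", "container", "inspect",
		container, "--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent"
		}
		return "Unknown"
	}
	return strings.TrimSpace(string(out)) // e.g. "running", "exited", "paused"
}

func main() {
	fmt.Println(hostState("old-k8s-version-20211117123155-2067"))
}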
TestStartStop/group/default-k8s-different-port/serial/FirstStart (53.24s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117123427-2067 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117123427-2067 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3: exit status 80 (52.940983678s)
-- stdout --
	* [default-k8s-different-port-20211117123427-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node default-k8s-different-port-20211117123427-2067 in cluster default-k8s-different-port-20211117123427-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20211117123427-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	
-- /stdout --
** stderr ** 
	I1117 12:34:27.042619   19001 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:34:27.042773   19001 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:34:27.042778   19001 out.go:310] Setting ErrFile to fd 2...
	I1117 12:34:27.042782   19001 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:34:27.042861   19001 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:34:27.043172   19001 out.go:304] Setting JSON to false
	I1117 12:34:27.070008   19001 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3842,"bootTime":1637177425,"procs":324,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:34:27.070101   19001 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:34:27.097168   19001 out.go:176] * [default-k8s-different-port-20211117123427-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:34:27.097395   19001 notify.go:174] Checking for updates...
	I1117 12:34:27.144734   19001 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:34:27.170620   19001 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:34:27.196616   19001 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:34:27.222408   19001 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:34:27.222879   19001 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:34:27.222965   19001 config.go:176] Loaded profile config "no-preload-20211117123224-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 12:34:27.222998   19001 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:34:27.312674   19001 docker.go:132] docker version: linux-20.10.5
	I1117 12:34:27.312817   19001 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:34:27.466825   19001 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:34:27.418128805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:34:27.492753   19001 out.go:176] * Using the docker driver based on user configuration
	I1117 12:34:27.492787   19001 start.go:280] selected driver: docker
	I1117 12:34:27.492793   19001 start.go:775] validating driver "docker" against <nil>
	I1117 12:34:27.492804   19001 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:34:27.495534   19001 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:34:27.664441   19001 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:34:27.610738268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:34:27.664572   19001 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:34:27.664710   19001 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:34:27.664727   19001 cni.go:93] Creating CNI manager for ""
	I1117 12:34:27.664749   19001 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:34:27.664756   19001 start_flags.go:282] config:
	{Name:default-k8s-different-port-20211117123427-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-port-20211117123427-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:34:27.745101   19001 out.go:176] * Starting control plane node default-k8s-different-port-20211117123427-2067 in cluster default-k8s-different-port-20211117123427-2067
	I1117 12:34:27.745200   19001 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:34:27.771344   19001 out.go:176] * Pulling base image ...
	I1117 12:34:27.771443   19001 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:34:27.771494   19001 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:34:27.771522   19001 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:34:27.771563   19001 cache.go:57] Caching tarball of preloaded images
	I1117 12:34:27.771775   19001 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:34:27.771797   19001 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:34:27.772881   19001 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/default-k8s-different-port-20211117123427-2067/config.json ...
	I1117 12:34:27.773010   19001 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/default-k8s-different-port-20211117123427-2067/config.json: {Name:mk2feea9954f51f7b4bec3d980a0a5b446df8695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:34:27.896888   19001 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:34:27.896907   19001 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:34:27.896939   19001 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:34:27.896978   19001 start.go:313] acquiring machines lock for default-k8s-different-port-20211117123427-2067: {Name:mk77409e95c4c1e3bbfbfb2785de5cabcca9e8cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:34:27.897163   19001 start.go:317] acquired machines lock for "default-k8s-different-port-20211117123427-2067" in 173.636µs
	I1117 12:34:27.897197   19001 start.go:89] Provisioning new machine with config: &{Name:default-k8s-different-port-20211117123427-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-port-20211117123427-2067 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8444 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:34:27.897277   19001 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:34:27.924436   19001 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:34:27.924942   19001 start.go:160] libmachine.API.Create for "default-k8s-different-port-20211117123427-2067" (driver="docker")
	I1117 12:34:27.925020   19001 client.go:168] LocalClient.Create starting
	I1117 12:34:27.925281   19001 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:34:27.946046   19001 main.go:130] libmachine: Decoding PEM data...
	I1117 12:34:27.946126   19001 main.go:130] libmachine: Parsing certificate...
	I1117 12:34:27.946205   19001 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:34:27.946261   19001 main.go:130] libmachine: Decoding PEM data...
	I1117 12:34:27.946274   19001 main.go:130] libmachine: Parsing certificate...
	I1117 12:34:27.946920   19001 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:34:28.060366   19001 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:34:28.060520   19001 network_create.go:254] running [docker network inspect default-k8s-different-port-20211117123427-2067] to gather additional debugging logs...
	I1117 12:34:28.060562   19001 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067
	W1117 12:34:28.169633   19001 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:34:28.169667   19001 network_create.go:257] error running [docker network inspect default-k8s-different-port-20211117123427-2067]: docker network inspect default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20211117123427-2067
	I1117 12:34:28.169711   19001 network_create.go:259] output of [docker network inspect default-k8s-different-port-20211117123427-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20211117123427-2067
	
	** /stderr **
	I1117 12:34:28.169832   19001 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:34:28.276985   19001 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001805c0] misses:0}
	I1117 12:34:28.277022   19001 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:34:28.277039   19001 network_create.go:106] attempt to create docker network default-k8s-different-port-20211117123427-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:34:28.277124   19001 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117123427-2067
	I1117 12:34:33.991094   19001 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117123427-2067: (5.713978878s)
	I1117 12:34:33.991118   19001 network_create.go:90] docker network default-k8s-different-port-20211117123427-2067 192.168.49.0/24 created
	I1117 12:34:33.991138   19001 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20211117123427-2067" container
	I1117 12:34:33.991244   19001 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:34:34.091325   19001 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20211117123427-2067 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117123427-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:34:34.194649   19001 oci.go:102] Successfully created a docker volume default-k8s-different-port-20211117123427-2067
	I1117 12:34:34.194810   19001 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20211117123427-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117123427-2067 --entrypoint /usr/bin/test -v default-k8s-different-port-20211117123427-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:34:34.680524   19001 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20211117123427-2067
	E1117 12:34:34.680579   19001 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:34:34.680592   19001 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:34:34.680604   19001 client.go:171] LocalClient.Create took 6.755636737s
	I1117 12:34:34.680620   19001 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:34:34.680713   19001 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117123427-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:34:36.683026   19001 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:34:36.683133   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:34:36.817594   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:34:36.817738   19001 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:37.101251   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:34:37.219884   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:34:37.219992   19001 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:37.760651   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:34:37.877593   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:34:37.877672   19001 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:38.533033   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:34:38.654240   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	W1117 12:34:38.654326   19001 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	W1117 12:34:38.654361   19001 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:38.654375   19001 start.go:129] duration metric: createHost completed in 10.757190268s
	I1117 12:34:38.654389   19001 start.go:80] releasing machines lock for "default-k8s-different-port-20211117123427-2067", held for 10.757316418s
	W1117 12:34:38.654409   19001 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:34:38.654905   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:38.790488   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:38.790537   19001 delete.go:82] Unable to get host status for default-k8s-different-port-20211117123427-2067, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	W1117 12:34:38.790670   19001 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:34:38.790681   19001 start.go:547] Will try again in 5 seconds ...
	I1117 12:34:40.555330   19001 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117123427-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.874647891s)
	I1117 12:34:40.555347   19001 kic.go:188] duration metric: took 5.874781 seconds to extract preloaded images to volume
	I1117 12:34:43.800886   19001 start.go:313] acquiring machines lock for default-k8s-different-port-20211117123427-2067: {Name:mk77409e95c4c1e3bbfbfb2785de5cabcca9e8cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:34:43.801048   19001 start.go:317] acquired machines lock for "default-k8s-different-port-20211117123427-2067" in 126.665µs
	I1117 12:34:43.801091   19001 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:34:43.801105   19001 fix.go:55] fixHost starting: 
	I1117 12:34:43.801572   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:43.904733   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:43.904780   19001 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211117123427-2067: state= err=unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:43.904796   19001 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:34:43.931567   19001 out.go:176] * docker "default-k8s-different-port-20211117123427-2067" container is missing, will recreate.
	I1117 12:34:43.931583   19001 delete.go:124] DEMOLISHING default-k8s-different-port-20211117123427-2067 ...
	I1117 12:34:43.931692   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:44.040940   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:34:44.040990   19001 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:44.041004   19001 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:44.041418   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:44.144575   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:44.144620   19001 delete.go:82] Unable to get host status for default-k8s-different-port-20211117123427-2067, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:44.144711   19001 cli_runner.go:115] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067
	W1117 12:34:44.247932   19001 cli_runner.go:162] docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:34:44.247957   19001 kic.go:360] could not find the container default-k8s-different-port-20211117123427-2067 to remove it. will try anyways
	I1117 12:34:44.248043   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:44.354497   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:34:44.354538   19001 oci.go:83] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:44.354649   19001 cli_runner.go:115] Run: docker exec --privileged -t default-k8s-different-port-20211117123427-2067 /bin/bash -c "sudo init 0"
	W1117 12:34:44.464912   19001 cli_runner.go:162] docker exec --privileged -t default-k8s-different-port-20211117123427-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:34:44.464936   19001 oci.go:656] error shutdown default-k8s-different-port-20211117123427-2067: docker exec --privileged -t default-k8s-different-port-20211117123427-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:45.467761   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:45.569966   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:45.570017   19001 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:45.570028   19001 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:34:45.570056   19001 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:46.033052   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:46.142841   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:46.142882   19001 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:46.142891   19001 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:34:46.142914   19001 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:47.041752   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:47.293393   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:47.293442   19001 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:47.293450   19001 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:34:47.293472   19001 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:47.932818   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:48.046321   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:48.046372   19001 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:48.046383   19001 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:34:48.046412   19001 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:49.158035   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:49.361954   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:49.361996   19001 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:49.362003   19001 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:34:49.362039   19001 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:50.882802   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:50.996185   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:50.996254   19001 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:50.996265   19001 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:34:50.996288   19001 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:54.038224   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:34:54.140407   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:54.140446   19001 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:54.140455   19001 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:34:54.140477   19001 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:34:59.924667   19001 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:00.039785   19001 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:00.039826   19001 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:00.039836   19001 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:35:00.039860   19001 oci.go:87] couldn't shut down default-k8s-different-port-20211117123427-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	 
	I1117 12:35:00.039941   19001 cli_runner.go:115] Run: docker rm -f -v default-k8s-different-port-20211117123427-2067
	I1117 12:35:00.173755   19001 cli_runner.go:115] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067
	W1117 12:35:00.288910   19001 cli_runner.go:162] docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:00.289040   19001 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:35:00.487033   19001 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:35:00.487157   19001 network_create.go:254] running [docker network inspect default-k8s-different-port-20211117123427-2067] to gather additional debugging logs...
	I1117 12:35:00.487174   19001 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067
	W1117 12:35:00.639505   19001 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:00.639531   19001 network_create.go:257] error running [docker network inspect default-k8s-different-port-20211117123427-2067]: docker network inspect default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20211117123427-2067
	I1117 12:35:00.639544   19001 network_create.go:259] output of [docker network inspect default-k8s-different-port-20211117123427-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20211117123427-2067
	
	** /stderr **
	W1117 12:35:00.639779   19001 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:35:00.639785   19001 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:35:01.639824   19001 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:35:01.666677   19001 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:35:01.666751   19001 start.go:160] libmachine.API.Create for "default-k8s-different-port-20211117123427-2067" (driver="docker")
	I1117 12:35:01.666770   19001 client.go:168] LocalClient.Create starting
	I1117 12:35:01.666870   19001 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:35:01.666932   19001 main.go:130] libmachine: Decoding PEM data...
	I1117 12:35:01.666945   19001 main.go:130] libmachine: Parsing certificate...
	I1117 12:35:01.666997   19001 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:35:01.667032   19001 main.go:130] libmachine: Decoding PEM data...
	I1117 12:35:01.667041   19001 main.go:130] libmachine: Parsing certificate...
	I1117 12:35:01.687886   19001 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:35:01.788387   19001 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:35:01.788486   19001 network_create.go:254] running [docker network inspect default-k8s-different-port-20211117123427-2067] to gather additional debugging logs...
	I1117 12:35:01.788503   19001 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067
	W1117 12:35:01.891602   19001 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:01.891629   19001 network_create.go:257] error running [docker network inspect default-k8s-different-port-20211117123427-2067]: docker network inspect default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20211117123427-2067
	I1117 12:35:01.891650   19001 network_create.go:259] output of [docker network inspect default-k8s-different-port-20211117123427-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20211117123427-2067
	
	** /stderr **
	I1117 12:35:01.891757   19001 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:35:01.995318   19001 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001805c0] amended:false}} dirty:map[] misses:0}
	I1117 12:35:01.995346   19001 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:35:01.995515   19001 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001805c0] amended:true}} dirty:map[192.168.49.0:0xc0001805c0 192.168.58.0:0xc000c201d0] misses:0}
	I1117 12:35:01.995529   19001 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:35:01.995538   19001 network_create.go:106] attempt to create docker network default-k8s-different-port-20211117123427-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:35:01.995627   19001 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117123427-2067
	I1117 12:35:13.805099   19001 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117123427-2067: (11.809543559s)
	I1117 12:35:13.805125   19001 network_create.go:90] docker network default-k8s-different-port-20211117123427-2067 192.168.58.0/24 created
	I1117 12:35:13.805141   19001 kic.go:106] calculated static IP "192.168.58.2" for the "default-k8s-different-port-20211117123427-2067" container
	I1117 12:35:13.806682   19001 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:35:13.911712   19001 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20211117123427-2067 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117123427-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:35:14.014510   19001 oci.go:102] Successfully created a docker volume default-k8s-different-port-20211117123427-2067
	I1117 12:35:14.014635   19001 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20211117123427-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117123427-2067 --entrypoint /usr/bin/test -v default-k8s-different-port-20211117123427-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:35:14.415381   19001 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20211117123427-2067
	E1117 12:35:14.415455   19001 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:35:14.415466   19001 client.go:171] LocalClient.Create took 12.748806729s
	I1117 12:35:14.415459   19001 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:35:14.415487   19001 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:35:14.415586   19001 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117123427-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:35:16.420553   19001 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:35:16.420642   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:35:16.594765   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:16.594891   19001 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:16.783127   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:35:16.923247   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:16.923373   19001 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:17.256685   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:35:17.376052   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:17.376139   19001 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:17.836494   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:35:17.955690   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	W1117 12:35:17.955779   19001 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	W1117 12:35:17.955803   19001 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:17.955816   19001 start.go:129] duration metric: createHost completed in 16.316124413s
	I1117 12:35:17.955880   19001 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:35:17.955943   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:35:18.072637   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:18.072736   19001 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:18.276542   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:35:18.408147   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:18.408226   19001 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:18.707640   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:35:18.836560   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:18.836671   19001 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:19.507625   19001 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:35:19.641207   19001 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	W1117 12:35:19.641312   19001 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	W1117 12:35:19.641361   19001 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:19.641380   19001 fix.go:57] fixHost completed within 35.840601732s
	I1117 12:35:19.641390   19001 start.go:80] releasing machines lock for "default-k8s-different-port-20211117123427-2067", held for 35.840656053s
	W1117 12:35:19.641584   19001 out.go:241] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20211117123427-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20211117123427-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:35:19.757147   19001 out.go:176] 
	W1117 12:35:19.757294   19001 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:35:19.757309   19001 out.go:241] * 
	* 
	W1117 12:35:19.758021   19001 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:35:19.884102   19001 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117123427-2067 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "800e4da8872f0b397fa1913a119449635250f6943aaee447bc2bd2e9ec985835",
	        "Created": "2021-11-17T20:35:02.118145241Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (173.674677ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:35:20.239685   19577 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (53.24s)
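Both createHost attempts above fail at the same point: the kicbase preload sidecar reports "Unable to locate kernel modules", so the node container is never created and every later port-22 lookup hits "No such container". The commands below are copied from the log and are only a sketch of how one could confirm that state by hand before following the delete hint minikube prints; the profile name is the one used in this run.

	# Verify the node container and its state are missing, as the retries above report:
	docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	# Inspect the leftover minikube network that did get created (192.168.58.0/24 per the post-mortem above):
	docker network inspect default-k8s-different-port-20211117123427-2067
	# Clean up the partial profile, per the hint in the failure output:
	minikube delete -p default-k8s-different-port-20211117123427-2067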

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20211117123224-2067" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "e0e6f7e28a87dee61f6590decac6a746393a7acc2e112cf0306796c62f45fcf4",
	        "Created": "2021-11-17T20:34:29.726471754Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (145.272512ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:34:47.804639   19211 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.25s)
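The wait fails before any Kubernetes call is made because the kubeconfig context for this profile was never written (its FirstStart failed). A quick manual check, assuming the same kubeconfig the test run used, would be:

	# List kubeconfig contexts; the profile's context should be absent:
	kubectl config get-contexts
	# Cross-check against minikube's view of existing profiles:
	out/minikube-darwin-amd64 profile list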

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20211117123224-2067" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20211117123224-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20211117123224-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (39.132196ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-20211117123224-2067" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-20211117123224-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "e0e6f7e28a87dee61f6590decac6a746393a7acc2e112cf0306796c62f45fcf4",
	        "Created": "2021-11-17T20:34:29.726471754Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (151.329416ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:34:48.106305   19222 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20211117123224-2067 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p no-preload-20211117123224-2067 "sudo crictl images -o json": exit status 80 (1.24805003s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p no-preload-20211117123224-2067 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:289: failed to decode images json unexpected end of JSON input. output:

start_stop_delete_test.go:289: v1.22.4-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-proxy:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-scheduler:v1.22.4-rc.0",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
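Note: the (-want +got) diff above is produced by decoding the JSON from `sudo crictl images -o json` and checking that every expected repo tag is present; since the ssh command exited 80, the captured output was empty, json decoding failed with "unexpected end of JSON input", and every expected image is reported as missing. An illustrative sketch of that comparison (the JSON field names are assumed from crictl's output format, not taken from the test source):

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages models the shape of `crictl images -o json` output
// (assumed schema: an "images" array whose entries carry "repoTags").
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages returns the entries of want that do not appear in raw.
func missingImages(raw []byte, want []string) ([]string, error) {
	var parsed crictlImages
	if err := json.Unmarshal(raw, &parsed); err != nil {
		// Empty input (e.g. when the ssh command fails) yields
		// "unexpected end of JSON input", as seen in this log.
		return nil, err
	}
	got := map[string]bool{}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			got[tag] = true
		}
	}
	var missing []string
	for _, w := range want {
		if !got[w] {
			missing = append(missing, w)
		}
	}
	return missing, nil
}

func main() {
	_, err := missingImages([]byte(""), []string{"k8s.gcr.io/pause:3.5"})
	fmt.Println(err) // unexpected end of JSON input
}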
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:

-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "e0e6f7e28a87dee61f6590decac6a746393a7acc2e112cf0306796c62f45fcf4",
	        "Created": "2021-11-17T20:34:29.726471754Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (148.543646ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:34:49.609481   19243 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.50s)
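Note: the post-mortem's `docker inspect no-preload-20211117123224-2067` prints a network object (Scope, IPAM, an empty "Containers" map) rather than a container, because plain `docker inspect` matches any Docker object with that name and only the minikube-created network survived the container's disappearance. A sketch of making that distinction explicit with inspect's --type flag (illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "no-preload-20211117123224-2067"

	// Restricted to containers, the lookup fails just like the errors above.
	if out, err := exec.Command("docker", "inspect", "--type", "container", name).CombinedOutput(); err != nil {
		fmt.Printf("container lookup failed: %v\n%s", err, out)
	}

	// The bridge network created for the profile is what the post-mortem printed.
	if out, err := exec.Command("docker", "inspect", "--type", "network", name).CombinedOutput(); err == nil {
		fmt.Printf("network still present:\n%s", out)
	}
}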

TestStartStop/group/no-preload/serial/Pause (0.71s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20211117123224-2067 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p no-preload-20211117123224-2067 --alsologtostderr -v=1: exit status 80 (203.870762ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 12:34:49.650722   19248 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:34:49.650915   19248 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:34:49.650921   19248 out.go:310] Setting ErrFile to fd 2...
	I1117 12:34:49.650924   19248 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:34:49.651012   19248 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:34:49.651186   19248 out.go:304] Setting JSON to false
	I1117 12:34:49.651202   19248 mustload.go:65] Loading cluster: no-preload-20211117123224-2067
	I1117 12:34:49.651423   19248 config.go:176] Loaded profile config "no-preload-20211117123224-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 12:34:49.651777   19248 cli_runner.go:115] Run: docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}
	W1117 12:34:49.754534   19248 cli_runner.go:162] docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:34:49.781807   19248 out.go:176] 
	W1117 12:34:49.782004   19248 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067
	
	W1117 12:34:49.782025   19248 out.go:241] * 
	* 
	W1117 12:34:49.785565   19248 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:34:49.812675   19248 out.go:176] 

** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p no-preload-20211117123224-2067 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:

-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "e0e6f7e28a87dee61f6590decac6a746393a7acc2e112cf0306796c62f45fcf4",
	        "Created": "2021-11-17T20:34:29.726471754Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (146.394949ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:34:50.066254   19257 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117123224-2067
helpers_test.go:235: (dbg) docker inspect no-preload-20211117123224-2067:

-- stdout --
	[
	    {
	        "Name": "no-preload-20211117123224-2067",
	        "Id": "e0e6f7e28a87dee61f6590decac6a746393a7acc2e112cf0306796c62f45fcf4",
	        "Created": "2021-11-17T20:34:29.726471754Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117123224-2067 -n no-preload-20211117123224-2067: exit status 7 (146.1784ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:34:50.319872   19266 status.go:247] status error: host: state: unknown state "no-preload-20211117123224-2067": docker container inspect no-preload-20211117123224-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-20211117123224-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-20211117123224-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.71s)
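Note: the FirstStart log below retries the ssh-port lookup with growing delays ("will retry after 276.165072ms", "540.190908ms", "655.06503ms", ...) before giving up. The pattern, reduced to an illustrative sketch rather than minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs op up to attempts times, roughly doubling the wait
// between failures, and returns the last error if none succeed.
func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(4, 250*time.Millisecond, func() error {
		return errors.New("get port 22: exit status 1")
	})
	fmt.Println("giving up:", err)
}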

TestStartStop/group/newest-cni/serial/FirstStart (49.74s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20211117123459-2067 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-20211117123459-2067 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 80 (49.43479514s)

-- stdout --
	* [newest-cni-20211117123459-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node newest-cni-20211117123459-2067 in cluster newest-cni-20211117123459-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20211117123459-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:34:59.699636   19367 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:34:59.699769   19367 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:34:59.699774   19367 out.go:310] Setting ErrFile to fd 2...
	I1117 12:34:59.699777   19367 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:34:59.699857   19367 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:34:59.700171   19367 out.go:304] Setting JSON to false
	I1117 12:34:59.724685   19367 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3874,"bootTime":1637177425,"procs":321,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:34:59.724771   19367 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:34:59.752596   19367 out.go:176] * [newest-cni-20211117123459-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:34:59.752775   19367 notify.go:174] Checking for updates...
	I1117 12:34:59.800278   19367 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:34:59.827137   19367 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:34:59.853016   19367 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:34:59.878109   19367 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:34:59.878535   19367 config.go:176] Loaded profile config "default-k8s-different-port-20211117123427-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:34:59.878613   19367 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:34:59.878656   19367 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:34:59.974683   19367 docker.go:132] docker version: linux-20.10.5
	I1117 12:34:59.974813   19367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:35:00.144103   19367 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:35:00.091993074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:35:00.171339   19367 out.go:176] * Using the docker driver based on user configuration
	I1117 12:35:00.171405   19367 start.go:280] selected driver: docker
	I1117 12:35:00.171417   19367 start.go:775] validating driver "docker" against <nil>
	I1117 12:35:00.171441   19367 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:35:00.174822   19367 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:35:00.349217   19367 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:35:00.295328856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:35:00.349328   19367 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	W1117 12:35:00.349356   19367 out.go:241] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1117 12:35:00.349477   19367 start_flags.go:777] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1117 12:35:00.349494   19367 cni.go:93] Creating CNI manager for ""
	I1117 12:35:00.349501   19367 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:35:00.349510   19367 start_flags.go:282] config:
	{Name:newest-cni-20211117123459-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:newest-cni-20211117123459-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:35:00.398337   19367 out.go:176] * Starting control plane node newest-cni-20211117123459-2067 in cluster newest-cni-20211117123459-2067
	I1117 12:35:00.398399   19367 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:35:00.453468   19367 out.go:176] * Pulling base image ...
	I1117 12:35:00.453567   19367 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:35:00.453625   19367 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:35:00.453695   19367 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	I1117 12:35:00.453738   19367 cache.go:57] Caching tarball of preloaded images
	I1117 12:35:00.454003   19367 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:35:00.454031   19367 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.4-rc.0 on docker
	I1117 12:35:00.455368   19367 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/newest-cni-20211117123459-2067/config.json ...
	I1117 12:35:00.455584   19367 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/newest-cni-20211117123459-2067/config.json: {Name:mk9c2c6586e7051527bc4fc50b28763566b77be1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:35:00.584442   19367 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:35:00.584468   19367 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:35:00.584484   19367 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:35:00.584527   19367 start.go:313] acquiring machines lock for newest-cni-20211117123459-2067: {Name:mk8c536102b388ea9752e9ca8e2ac2f69703a931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:35:00.584728   19367 start.go:317] acquired machines lock for "newest-cni-20211117123459-2067" in 188.699µs
	I1117 12:35:00.584759   19367 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20211117123459-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:newest-cni-20211117123459-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}
	I1117 12:35:00.584814   19367 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:35:00.632376   19367 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:35:00.632875   19367 start.go:160] libmachine.API.Create for "newest-cni-20211117123459-2067" (driver="docker")
	I1117 12:35:00.632931   19367 client.go:168] LocalClient.Create starting
	I1117 12:35:00.633160   19367 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:35:00.633314   19367 main.go:130] libmachine: Decoding PEM data...
	I1117 12:35:00.633365   19367 main.go:130] libmachine: Parsing certificate...
	I1117 12:35:00.633457   19367 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:35:00.633529   19367 main.go:130] libmachine: Decoding PEM data...
	I1117 12:35:00.633560   19367 main.go:130] libmachine: Parsing certificate...
	I1117 12:35:00.634440   19367 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:35:00.741177   19367 cli_runner.go:162] docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:35:00.741296   19367 network_create.go:254] running [docker network inspect newest-cni-20211117123459-2067] to gather additional debugging logs...
	I1117 12:35:00.741311   19367 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067
	W1117 12:35:00.845430   19367 cli_runner.go:162] docker network inspect newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:00.845454   19367 network_create.go:257] error running [docker network inspect newest-cni-20211117123459-2067]: docker network inspect newest-cni-20211117123459-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20211117123459-2067
	I1117 12:35:00.845469   19367 network_create.go:259] output of [docker network inspect newest-cni-20211117123459-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20211117123459-2067
	
	** /stderr **
	I1117 12:35:00.845556   19367 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:35:00.949976   19367 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001162b0] misses:0}
	I1117 12:35:00.950010   19367 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:35:00.950033   19367 network_create.go:106] attempt to create docker network newest-cni-20211117123459-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:35:00.950114   19367 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067
	I1117 12:35:06.575637   19367 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067: (5.625524647s)
	I1117 12:35:06.575662   19367 network_create.go:90] docker network newest-cni-20211117123459-2067 192.168.49.0/24 created
	I1117 12:35:06.575680   19367 kic.go:106] calculated static IP "192.168.49.2" for the "newest-cni-20211117123459-2067" container
	I1117 12:35:06.575794   19367 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:35:06.678549   19367 cli_runner.go:115] Run: docker volume create newest-cni-20211117123459-2067 --label name.minikube.sigs.k8s.io=newest-cni-20211117123459-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:35:06.781539   19367 oci.go:102] Successfully created a docker volume newest-cni-20211117123459-2067
	I1117 12:35:06.781670   19367 cli_runner.go:115] Run: docker run --rm --name newest-cni-20211117123459-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20211117123459-2067 --entrypoint /usr/bin/test -v newest-cni-20211117123459-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:35:07.287059   19367 oci.go:106] Successfully prepared a docker volume newest-cni-20211117123459-2067
	E1117 12:35:07.287108   19367 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:35:07.287133   19367 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:35:07.287134   19367 client.go:171] LocalClient.Create took 6.654254541s
	I1117 12:35:07.287166   19367 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:35:07.287286   19367 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117123459-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:35:09.297120   19367 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:35:09.297205   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:09.416001   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:09.416095   19367 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:09.692856   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:09.809772   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:09.809859   19367 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:10.354509   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:10.474662   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:10.474739   19367 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:11.134323   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:11.275342   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	W1117 12:35:11.275452   19367 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	W1117 12:35:11.275478   19367 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:11.275491   19367 start.go:129] duration metric: createHost completed in 10.690769628s
	I1117 12:35:11.275499   19367 start.go:80] releasing machines lock for "newest-cni-20211117123459-2067", held for 10.690860449s
	W1117 12:35:11.275517   19367 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:35:11.276048   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:11.399379   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:11.399437   19367 delete.go:82] Unable to get host status for newest-cni-20211117123459-2067, assuming it has already been deleted: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	W1117 12:35:11.399601   19367 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:35:11.399617   19367 start.go:547] Will try again in 5 seconds ...
	I1117 12:35:13.342117   19367 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117123459-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.054846916s)
	I1117 12:35:13.342141   19367 kic.go:188] duration metric: took 6.055035 seconds to extract preloaded images to volume
	I1117 12:35:16.404910   19367 start.go:313] acquiring machines lock for newest-cni-20211117123459-2067: {Name:mk8c536102b388ea9752e9ca8e2ac2f69703a931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:35:16.405032   19367 start.go:317] acquired machines lock for "newest-cni-20211117123459-2067" in 91.783µs
	I1117 12:35:16.405059   19367 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:35:16.405067   19367 fix.go:55] fixHost starting: 
	I1117 12:35:16.405325   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:16.559720   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:16.559813   19367 fix.go:108] recreateIfNeeded on newest-cni-20211117123459-2067: state= err=unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:16.559865   19367 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:35:16.586848   19367 out.go:176] * docker "newest-cni-20211117123459-2067" container is missing, will recreate.
	I1117 12:35:16.586869   19367 delete.go:124] DEMOLISHING newest-cni-20211117123459-2067 ...
	I1117 12:35:16.587012   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:16.710319   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:16.710368   19367 stop.go:75] unable to get state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:16.710383   19367 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:16.710870   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:16.847191   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:16.847266   19367 delete.go:82] Unable to get host status for newest-cni-20211117123459-2067, assuming it has already been deleted: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:16.847376   19367 cli_runner.go:115] Run: docker container inspect -f {{.Id}} newest-cni-20211117123459-2067
	W1117 12:35:16.973854   19367 cli_runner.go:162] docker container inspect -f {{.Id}} newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:16.973893   19367 kic.go:360] could not find the container newest-cni-20211117123459-2067 to remove it. will try anyways
	I1117 12:35:16.974001   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:17.106943   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:17.107011   19367 oci.go:83] error getting container status, will try to delete anyways: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:17.107121   19367 cli_runner.go:115] Run: docker exec --privileged -t newest-cni-20211117123459-2067 /bin/bash -c "sudo init 0"
	W1117 12:35:17.224797   19367 cli_runner.go:162] docker exec --privileged -t newest-cni-20211117123459-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:35:17.224830   19367 oci.go:656] error shutdown newest-cni-20211117123459-2067: docker exec --privileged -t newest-cni-20211117123459-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:18.234357   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:18.363802   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:18.363858   19367 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:18.363871   19367 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:35:18.363900   19367 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:18.833997   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:18.973609   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:18.973681   19367 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:18.973696   19367 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:35:18.973736   19367 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:19.867898   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:20.009054   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:20.009099   19367 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:20.009113   19367 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:35:20.009142   19367 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:20.646804   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:20.760552   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:20.760633   19367 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:20.760913   19367 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:35:20.760967   19367 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:21.879162   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:21.986719   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:21.986761   19367 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:21.986771   19367 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:35:21.986795   19367 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:23.507740   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:23.613028   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:23.613070   19367 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:23.613079   19367 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:35:23.613099   19367 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:26.657714   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:26.764039   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:26.764082   19367 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:26.764092   19367 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:35:26.764115   19367 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:32.556385   19367 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:32.661298   19367 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:32.661346   19367 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:32.661356   19367 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:35:32.661396   19367 oci.go:87] couldn't shut down newest-cni-20211117123459-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	 
	I1117 12:35:32.661477   19367 cli_runner.go:115] Run: docker rm -f -v newest-cni-20211117123459-2067
	I1117 12:35:32.762945   19367 cli_runner.go:115] Run: docker container inspect -f {{.Id}} newest-cni-20211117123459-2067
	W1117 12:35:32.865221   19367 cli_runner.go:162] docker container inspect -f {{.Id}} newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:32.865327   19367 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:35:32.968033   19367 cli_runner.go:115] Run: docker network rm newest-cni-20211117123459-2067
	I1117 12:35:36.528220   19367 cli_runner.go:168] Completed: docker network rm newest-cni-20211117123459-2067: (3.560125889s)
	W1117 12:35:36.528497   19367 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:35:36.528507   19367 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:35:37.532336   19367 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:35:37.558514   19367 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:35:37.558679   19367 start.go:160] libmachine.API.Create for "newest-cni-20211117123459-2067" (driver="docker")
	I1117 12:35:37.558711   19367 client.go:168] LocalClient.Create starting
	I1117 12:35:37.558897   19367 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:35:37.558977   19367 main.go:130] libmachine: Decoding PEM data...
	I1117 12:35:37.559004   19367 main.go:130] libmachine: Parsing certificate...
	I1117 12:35:37.559111   19367 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:35:37.559171   19367 main.go:130] libmachine: Decoding PEM data...
	I1117 12:35:37.559188   19367 main.go:130] libmachine: Parsing certificate...
	I1117 12:35:37.560118   19367 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:35:37.674800   19367 cli_runner.go:162] docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:35:37.674935   19367 network_create.go:254] running [docker network inspect newest-cni-20211117123459-2067] to gather additional debugging logs...
	I1117 12:35:37.674960   19367 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067
	W1117 12:35:37.816128   19367 cli_runner.go:162] docker network inspect newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:37.816154   19367 network_create.go:257] error running [docker network inspect newest-cni-20211117123459-2067]: docker network inspect newest-cni-20211117123459-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20211117123459-2067
	I1117 12:35:37.816173   19367 network_create.go:259] output of [docker network inspect newest-cni-20211117123459-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20211117123459-2067
	
	** /stderr **
	I1117 12:35:37.816254   19367 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:35:37.931348   19367 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001162b0] amended:false}} dirty:map[] misses:0}
	I1117 12:35:37.931384   19367 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:35:37.931608   19367 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001162b0] amended:true}} dirty:map[192.168.49.0:0xc0001162b0 192.168.58.0:0xc00072a0b0] misses:0}
	I1117 12:35:37.931624   19367 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:35:37.931633   19367 network_create.go:106] attempt to create docker network newest-cni-20211117123459-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:35:37.931729   19367 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067
	W1117 12:35:38.065292   19367 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067 returned with exit code 1
	W1117 12:35:38.065333   19367 network_create.go:98] failed to create docker network newest-cni-20211117123459-2067 192.168.58.0/24, will retry: subnet is taken
	I1117 12:35:38.065552   19367 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001162b0] amended:true}} dirty:map[192.168.49.0:0xc0001162b0 192.168.58.0:0xc00072a0b0] misses:1}
	I1117 12:35:38.065570   19367 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:35:38.065746   19367 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001162b0] amended:true}} dirty:map[192.168.49.0:0xc0001162b0 192.168.58.0:0xc00072a0b0 192.168.67.0:0xc00065c9c0] misses:1}
	I1117 12:35:38.065757   19367 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:35:38.065763   19367 network_create.go:106] attempt to create docker network newest-cni-20211117123459-2067 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 12:35:38.065837   19367 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067
	I1117 12:35:43.213765   19367 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067: (5.147910889s)
	I1117 12:35:43.213788   19367 network_create.go:90] docker network newest-cni-20211117123459-2067 192.168.67.0/24 created
	I1117 12:35:43.213806   19367 kic.go:106] calculated static IP "192.168.67.2" for the "newest-cni-20211117123459-2067" container
	I1117 12:35:43.213921   19367 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:35:43.318225   19367 cli_runner.go:115] Run: docker volume create newest-cni-20211117123459-2067 --label name.minikube.sigs.k8s.io=newest-cni-20211117123459-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:35:43.424430   19367 oci.go:102] Successfully created a docker volume newest-cni-20211117123459-2067
	I1117 12:35:43.424553   19367 cli_runner.go:115] Run: docker run --rm --name newest-cni-20211117123459-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20211117123459-2067 --entrypoint /usr/bin/test -v newest-cni-20211117123459-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:35:43.829969   19367 oci.go:106] Successfully prepared a docker volume newest-cni-20211117123459-2067
	E1117 12:35:43.830047   19367 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:35:43.830058   19367 client.go:171] LocalClient.Create took 6.271399703s
	I1117 12:35:43.830051   19367 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:35:43.830082   19367 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:35:43.830190   19367 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117123459-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:35:45.832618   19367 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:35:45.832818   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:45.969965   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:45.970062   19367 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:46.158124   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:46.275939   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:46.276027   19367 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:46.608107   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:46.729451   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:46.729551   19367 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:47.190019   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:47.313900   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	W1117 12:35:47.313985   19367 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	W1117 12:35:47.314005   19367 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:47.314014   19367 start.go:129] duration metric: createHost completed in 9.781752353s
	I1117 12:35:47.314082   19367 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:35:47.314143   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:47.436471   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:47.436572   19367 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:47.632717   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:47.756766   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:47.756903   19367 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:48.060667   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:48.187215   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:35:48.187295   19367 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:48.855229   19367 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:35:48.963261   19367 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	W1117 12:35:48.963360   19367 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	W1117 12:35:48.963379   19367 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:48.963389   19367 fix.go:57] fixHost completed within 32.558618585s
	I1117 12:35:48.963404   19367 start.go:80] releasing machines lock for "newest-cni-20211117123459-2067", held for 32.558659379s
	W1117 12:35:48.963601   19367 out.go:241] * Failed to start docker container. Running "minikube delete -p newest-cni-20211117123459-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p newest-cni-20211117123459-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:35:49.012887   19367 out.go:176] 
	W1117 12:35:49.013019   19367 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:35:49.013028   19367 out.go:241] * 
	* 
	W1117 12:35:49.013616   19367 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:35:49.090933   19367 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p newest-cni-20211117123459-2067 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117123459-2067
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117123459-2067:

-- stdout --
	[
	    {
	        "Name": "newest-cni-20211117123459-2067",
	        "Id": "b40e038bf6ea0baf7b528ac1f0fa01dd84cf53da3a4dbfa17123d23fc9c12340",
	        "Created": "2021-11-17T20:35:38.194681073Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067: exit status 7 (181.454107ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:35:49.397867   19865 status.go:247] status error: host: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117123459-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (49.74s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (0.57s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20211117123427-2067 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117123427-2067 create -f testdata/busybox.yaml: exit status 1 (49.374097ms)

** stderr ** 
	error: context "default-k8s-different-port-20211117123427-2067" does not exist

** /stderr **
start_stop_delete_test.go:181: kubectl --context default-k8s-different-port-20211117123427-2067 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "800e4da8872f0b397fa1913a119449635250f6943aaee447bc2bd2e9ec985835",
	        "Created": "2021-11-17T20:35:02.118145241Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (145.811751ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:35:20.552972   19587 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "800e4da8872f0b397fa1913a119449635250f6943aaee447bc2bd2e9ec985835",
	        "Created": "2021-11-17T20:35:02.118145241Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (154.492029ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:35:20.814891   19597 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (0.57s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.56s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20211117123427-2067 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20211117123427-2067 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117123427-2067 describe deploy/metrics-server -n kube-system: exit status 1 (39.681127ms)

** stderr ** 
	error: context "default-k8s-different-port-20211117123427-2067" does not exist

** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-different-port-20211117123427-2067 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "800e4da8872f0b397fa1913a119449635250f6943aaee447bc2bd2e9ec985835",
	        "Created": "2021-11-17T20:35:02.118145241Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (189.161526ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:35:21.366271   19615 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.56s)

TestStartStop/group/default-k8s-different-port/serial/Stop (14.97s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20211117123427-2067 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20211117123427-2067 --alsologtostderr -v=3: exit status 82 (14.715983647s)

-- stdout --
	* Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	* Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	* Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	* Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	* Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	* Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	
	

                                                
** stderr ** 
	I1117 12:35:21.411814   19620 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:35:21.411953   19620 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:35:21.411957   19620 out.go:310] Setting ErrFile to fd 2...
	I1117 12:35:21.411960   19620 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:35:21.412051   19620 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:35:21.412222   19620 out.go:304] Setting JSON to false
	I1117 12:35:21.412379   19620 mustload.go:65] Loading cluster: default-k8s-different-port-20211117123427-2067
	I1117 12:35:21.412603   19620 config.go:176] Loaded profile config "default-k8s-different-port-20211117123427-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:35:21.412648   19620 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/default-k8s-different-port-20211117123427-2067/config.json ...
	I1117 12:35:21.412978   19620 mustload.go:65] Loading cluster: default-k8s-different-port-20211117123427-2067
	I1117 12:35:21.413063   19620 config.go:176] Loaded profile config "default-k8s-different-port-20211117123427-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:35:21.413094   19620 stop.go:39] StopHost: default-k8s-different-port-20211117123427-2067
	I1117 12:35:21.440359   19620 out.go:176] * Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	I1117 12:35:21.440555   19620 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:21.546544   19620 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:21.546603   19620 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	W1117 12:35:21.546622   19620 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:21.546643   19620 retry.go:31] will retry after 1.104660288s: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:22.657536   19620 stop.go:39] StopHost: default-k8s-different-port-20211117123427-2067
	I1117 12:35:22.685223   19620 out.go:176] * Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	I1117 12:35:22.685562   19620 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:22.791757   19620 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:22.791798   19620 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	W1117 12:35:22.791815   19620 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:22.791831   19620 retry.go:31] will retry after 2.160763633s: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:24.957524   19620 stop.go:39] StopHost: default-k8s-different-port-20211117123427-2067
	I1117 12:35:24.985090   19620 out.go:176] * Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	I1117 12:35:24.985394   19620 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:25.091565   19620 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:25.091604   19620 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	W1117 12:35:25.091617   19620 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:25.091631   19620 retry.go:31] will retry after 2.62026012s: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:27.716018   19620 stop.go:39] StopHost: default-k8s-different-port-20211117123427-2067
	I1117 12:35:27.743322   19620 out.go:176] * Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	I1117 12:35:27.743564   19620 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:27.849660   19620 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:27.849724   19620 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	W1117 12:35:27.849743   19620 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:27.849759   19620 retry.go:31] will retry after 3.164785382s: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:31.015982   19620 stop.go:39] StopHost: default-k8s-different-port-20211117123427-2067
	I1117 12:35:31.043330   19620 out.go:176] * Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	I1117 12:35:31.043576   19620 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:31.150231   19620 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:31.150279   19620 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	W1117 12:35:31.150304   19620 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:31.150321   19620 retry.go:31] will retry after 4.680977329s: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:35.832397   19620 stop.go:39] StopHost: default-k8s-different-port-20211117123427-2067
	I1117 12:35:35.859653   19620 out.go:176] * Stopping node "default-k8s-different-port-20211117123427-2067"  ...
	I1117 12:35:35.859812   19620 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:35.958851   19620 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:35.958897   19620 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	W1117 12:35:35.958912   19620 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:35.985705   19620 out.go:176] 
	W1117 12:35:35.985957   19620 out.go:241] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20211117123427-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-different-port-20211117123427-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	W1117 12:35:35.985986   19620 out.go:241] * 
	* 
	W1117 12:35:35.992156   19620 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:35:36.067387   19620 out.go:176] 

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p default-k8s-different-port-20211117123427-2067 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "800e4da8872f0b397fa1913a119449635250f6943aaee447bc2bd2e9ec985835",
	        "Created": "2021-11-17T20:35:02.118145241Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (143.32996ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:35:36.341115   19681 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Stop (14.97s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.62s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (145.116563ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:35:36.486352   19686 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

                                                
                                                
** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20211117123427-2067 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "800e4da8872f0b397fa1913a119449635250f6943aaee447bc2bd2e9ec985835",
	        "Created": "2021-11-17T20:35:02.118145241Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (144.249687ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:35:36.965391   19700 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.62s)
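Illustration (hypothetical helper, not the actual test source): the assertion at start_stop_delete_test.go:226 compares the post-stop host state against the literal "Stopped"; here the container had been deleted, so the state came back "Nonexistent". A sketch of that kind of check in Go, ignoring the command error because the harness treats exit status 7 as "may be ok":

package poststop

import (
	"os/exec"
	"strings"
	"testing"
)

// requireStopped sketches the post-stop assertion pattern: run
// `minikube status --format={{.Host}}` for the profile and require the
// literal "Stopped". Any other value (such as "Nonexistent" above)
// fails the check.
func requireStopped(t *testing.T, profile string) {
	out, _ := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	if got := strings.TrimSpace(string(out)); got != "Stopped" {
		t.Errorf("expected post-stop host status to be %q but got %q", "Stopped", got)
	}
}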

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (76.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117123427-2067 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117123427-2067 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3: exit status 80 (1m16.426535963s)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20211117123427-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-different-port-20211117123427-2067 in cluster default-k8s-different-port-20211117123427-2067
	* Pulling base image ...
	* docker "default-k8s-different-port-20211117123427-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-different-port-20211117123427-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:35:37.007177   19705 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:35:37.007310   19705 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:35:37.007315   19705 out.go:310] Setting ErrFile to fd 2...
	I1117 12:35:37.007318   19705 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:35:37.007397   19705 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:35:37.007674   19705 out.go:304] Setting JSON to false
	I1117 12:35:37.034953   19705 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3912,"bootTime":1637177425,"procs":321,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:35:37.035047   19705 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:35:37.061386   19705 out.go:176] * [default-k8s-different-port-20211117123427-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:35:37.061610   19705 notify.go:174] Checking for updates...
	I1117 12:35:37.109274   19705 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:35:37.135094   19705 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:35:37.164304   19705 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:35:37.190266   19705 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:35:37.190966   19705 config.go:176] Loaded profile config "default-k8s-different-port-20211117123427-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:35:37.191580   19705 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:35:37.284400   19705 docker.go:132] docker version: linux-20.10.5
	I1117 12:35:37.284545   19705 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:35:37.449054   19705 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:35:37.407566102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:35:37.497222   19705 out.go:176] * Using the docker driver based on existing profile
	I1117 12:35:37.497290   19705 start.go:280] selected driver: docker
	I1117 12:35:37.497302   19705 start.go:775] validating driver "docker" against &{Name:default-k8s-different-port-20211117123427-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-port-20211117123427-2067 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:35:37.497407   19705 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:35:37.501048   19705 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:35:37.680334   19705 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:35:37.633592276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:35:37.680474   19705 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:35:37.680494   19705 cni.go:93] Creating CNI manager for ""
	I1117 12:35:37.680502   19705 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:35:37.680513   19705 start_flags.go:282] config:
	{Name:default-k8s-different-port-20211117123427-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:default-k8s-different-port-20211117123427-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:35:37.707329   19705 out.go:176] * Starting control plane node default-k8s-different-port-20211117123427-2067 in cluster default-k8s-different-port-20211117123427-2067
	I1117 12:35:37.707416   19705 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:35:37.733190   19705 out.go:176] * Pulling base image ...
	I1117 12:35:37.733266   19705 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:35:37.733333   19705 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:35:37.733355   19705 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:35:37.733365   19705 cache.go:57] Caching tarball of preloaded images
	I1117 12:35:37.733582   19705 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:35:37.733612   19705 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:35:37.734680   19705 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/default-k8s-different-port-20211117123427-2067/config.json ...
	I1117 12:35:37.862752   19705 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:35:37.862772   19705 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:35:37.862785   19705 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:35:37.862866   19705 start.go:313] acquiring machines lock for default-k8s-different-port-20211117123427-2067: {Name:mk77409e95c4c1e3bbfbfb2785de5cabcca9e8cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:35:37.862981   19705 start.go:317] acquired machines lock for "default-k8s-different-port-20211117123427-2067" in 91.881µs
	I1117 12:35:37.863006   19705 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:35:37.863016   19705 fix.go:55] fixHost starting: 
	I1117 12:35:37.863266   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:37.974698   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:37.974763   19705 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211117123427-2067: state= err=unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:37.974790   19705 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:35:38.001650   19705 out.go:176] * docker "default-k8s-different-port-20211117123427-2067" container is missing, will recreate.
	I1117 12:35:38.001753   19705 delete.go:124] DEMOLISHING default-k8s-different-port-20211117123427-2067 ...
	I1117 12:35:38.002001   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:38.120284   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:38.120332   19705 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:38.120348   19705 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:38.120792   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:38.232688   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:38.232745   19705 delete.go:82] Unable to get host status for default-k8s-different-port-20211117123427-2067, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:38.232843   19705 cli_runner.go:115] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067
	W1117 12:35:38.335785   19705 cli_runner.go:162] docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:38.335820   19705 kic.go:360] could not find the container default-k8s-different-port-20211117123427-2067 to remove it. will try anyways
	I1117 12:35:38.335920   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:38.439147   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:38.439199   19705 oci.go:83] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:38.439295   19705 cli_runner.go:115] Run: docker exec --privileged -t default-k8s-different-port-20211117123427-2067 /bin/bash -c "sudo init 0"
	W1117 12:35:38.543481   19705 cli_runner.go:162] docker exec --privileged -t default-k8s-different-port-20211117123427-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:35:38.543507   19705 oci.go:656] error shutdown default-k8s-different-port-20211117123427-2067: docker exec --privileged -t default-k8s-different-port-20211117123427-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:39.544400   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:39.646003   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:39.646048   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:39.646064   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:35:39.646099   19705 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:40.205594   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:40.307860   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:40.307906   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:40.307916   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:35:40.307950   19705 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:41.389122   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:41.491892   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:41.491931   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:41.491950   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:35:41.491972   19705 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:42.809462   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:42.913039   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:42.913080   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:42.913090   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:35:42.913111   19705 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:44.502351   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:44.627943   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:44.627986   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:44.627995   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:35:44.628019   19705 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:46.973588   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:47.096229   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:47.096283   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:47.096303   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:35:47.096329   19705 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:51.607423   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:51.712367   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:51.712407   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:51.712424   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:35:51.712447   19705 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:54.936684   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:35:55.044615   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:35:55.044657   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:35:55.044664   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:35:55.044692   19705 oci.go:87] couldn't shut down default-k8s-different-port-20211117123427-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	 
	I1117 12:35:55.044777   19705 cli_runner.go:115] Run: docker rm -f -v default-k8s-different-port-20211117123427-2067
	I1117 12:35:55.149982   19705 cli_runner.go:115] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067
	W1117 12:35:55.253725   19705 cli_runner.go:162] docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:55.253912   19705 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:35:55.357811   19705 cli_runner.go:115] Run: docker network rm default-k8s-different-port-20211117123427-2067
	I1117 12:35:58.593310   19705 cli_runner.go:168] Completed: docker network rm default-k8s-different-port-20211117123427-2067: (3.235479013s)
	W1117 12:35:58.594015   19705 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:35:58.594023   19705 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:35:59.598353   19705 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:35:59.625475   19705 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:35:59.625629   19705 start.go:160] libmachine.API.Create for "default-k8s-different-port-20211117123427-2067" (driver="docker")
	I1117 12:35:59.625669   19705 client.go:168] LocalClient.Create starting
	I1117 12:35:59.625857   19705 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:35:59.625989   19705 main.go:130] libmachine: Decoding PEM data...
	I1117 12:35:59.626041   19705 main.go:130] libmachine: Parsing certificate...
	I1117 12:35:59.626165   19705 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:35:59.626234   19705 main.go:130] libmachine: Decoding PEM data...
	I1117 12:35:59.626263   19705 main.go:130] libmachine: Parsing certificate...
	I1117 12:35:59.648112   19705 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:35:59.773447   19705 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:35:59.773543   19705 network_create.go:254] running [docker network inspect default-k8s-different-port-20211117123427-2067] to gather additional debugging logs...
	I1117 12:35:59.773567   19705 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067
	W1117 12:35:59.874137   19705 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:35:59.874168   19705 network_create.go:257] error running [docker network inspect default-k8s-different-port-20211117123427-2067]: docker network inspect default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20211117123427-2067
	I1117 12:35:59.874180   19705 network_create.go:259] output of [docker network inspect default-k8s-different-port-20211117123427-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20211117123427-2067
	
	** /stderr **
	I1117 12:35:59.874264   19705 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:35:59.977217   19705 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005322d8] misses:0}
	I1117 12:35:59.977255   19705 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:35:59.977287   19705 network_create.go:106] attempt to create docker network default-k8s-different-port-20211117123427-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:35:59.977363   19705 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117123427-2067
	I1117 12:36:04.840127   19705 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117123427-2067: (4.862749012s)
	I1117 12:36:04.840156   19705 network_create.go:90] docker network default-k8s-different-port-20211117123427-2067 192.168.49.0/24 created
	I1117 12:36:04.840177   19705 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20211117123427-2067" container
	I1117 12:36:04.840282   19705 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:36:04.951051   19705 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20211117123427-2067 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117123427-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:36:05.062838   19705 oci.go:102] Successfully created a docker volume default-k8s-different-port-20211117123427-2067
	I1117 12:36:05.062966   19705 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20211117123427-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117123427-2067 --entrypoint /usr/bin/test -v default-k8s-different-port-20211117123427-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:36:05.668390   19705 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20211117123427-2067
	E1117 12:36:05.668443   19705 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:36:05.668446   19705 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:36:05.668462   19705 client.go:171] LocalClient.Create took 6.042839424s
	I1117 12:36:05.668476   19705 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:36:05.668589   19705 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117123427-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:36:07.668787   19705 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:36:07.668888   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:07.791932   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:07.792016   19705 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:07.949532   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:08.093413   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:08.093494   19705 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:08.399762   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:08.516285   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:08.516377   19705 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:09.088519   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:09.199356   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	W1117 12:36:09.199440   19705 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	W1117 12:36:09.199458   19705 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:09.199472   19705 start.go:129] duration metric: createHost completed in 9.601184781s
	I1117 12:36:09.199530   19705 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:36:09.199586   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:09.360810   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:09.360914   19705 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:09.544381   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:09.659835   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:09.659949   19705 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:09.990634   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:10.137290   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:10.137411   19705 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:10.602325   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:10.722909   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	W1117 12:36:10.722992   19705 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	W1117 12:36:10.723031   19705 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:10.723042   19705 fix.go:57] fixHost completed within 32.860327529s
	I1117 12:36:10.723051   19705 start.go:80] releasing machines lock for "default-k8s-different-port-20211117123427-2067", held for 32.860361557s
	W1117 12:36:10.723067   19705 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:36:10.723194   19705 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:36:10.723201   19705 start.go:547] Will try again in 5 seconds ...
	I1117 12:36:12.021151   19705 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117123427-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.352553432s)
	I1117 12:36:12.021174   19705 kic.go:188] duration metric: took 6.352756 seconds to extract preloaded images to volume
	I1117 12:36:15.732062   19705 start.go:313] acquiring machines lock for default-k8s-different-port-20211117123427-2067: {Name:mk77409e95c4c1e3bbfbfb2785de5cabcca9e8cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:36:15.732196   19705 start.go:317] acquired machines lock for "default-k8s-different-port-20211117123427-2067" in 110.061µs
	I1117 12:36:15.732236   19705 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:36:15.732245   19705 fix.go:55] fixHost starting: 
	I1117 12:36:15.732705   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:15.836149   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:15.836189   19705 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211117123427-2067: state= err=unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:15.836199   19705 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:36:15.885294   19705 out.go:176] * docker "default-k8s-different-port-20211117123427-2067" container is missing, will recreate.
	I1117 12:36:15.885320   19705 delete.go:124] DEMOLISHING default-k8s-different-port-20211117123427-2067 ...
	I1117 12:36:15.885542   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:15.990308   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:36:15.990359   19705 stop.go:75] unable to get state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:15.990382   19705 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:15.990850   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:16.094869   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:16.094920   19705 delete.go:82] Unable to get host status for default-k8s-different-port-20211117123427-2067, assuming it has already been deleted: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:16.095006   19705 cli_runner.go:115] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067
	W1117 12:36:16.198835   19705 cli_runner.go:162] docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:16.198866   19705 kic.go:360] could not find the container default-k8s-different-port-20211117123427-2067 to remove it. will try anyways
	I1117 12:36:16.198964   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:16.301867   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:36:16.301937   19705 oci.go:83] error getting container status, will try to delete anyways: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:16.302064   19705 cli_runner.go:115] Run: docker exec --privileged -t default-k8s-different-port-20211117123427-2067 /bin/bash -c "sudo init 0"
	W1117 12:36:16.405842   19705 cli_runner.go:162] docker exec --privileged -t default-k8s-different-port-20211117123427-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:36:16.405872   19705 oci.go:656] error shutdown default-k8s-different-port-20211117123427-2067: docker exec --privileged -t default-k8s-different-port-20211117123427-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:17.416772   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:17.521619   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:17.521660   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:17.521676   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:36:17.521701   19705 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:17.915746   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:18.020127   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:18.020167   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:18.020183   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:36:18.020207   19705 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:18.617955   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:18.722249   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:18.722289   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:18.722297   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:36:18.722321   19705 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:20.057307   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:20.161738   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:20.161776   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:20.161786   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:36:20.161807   19705 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:21.382293   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:21.486449   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:21.486489   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:21.486498   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:36:21.486521   19705 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:23.276859   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:23.377787   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:23.377825   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:23.377836   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:36:23.377859   19705 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:26.655349   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:26.755945   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:26.756002   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:26.756013   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:36:26.756036   19705 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:32.854100   19705 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:32.951904   19705 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:32.951943   19705 oci.go:668] temporary error verifying shutdown: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:32.951961   19705 oci.go:670] temporary error: container default-k8s-different-port-20211117123427-2067 status is  but expect it to be exited
	I1117 12:36:32.951994   19705 oci.go:87] couldn't shut down default-k8s-different-port-20211117123427-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	 
	I1117 12:36:32.952080   19705 cli_runner.go:115] Run: docker rm -f -v default-k8s-different-port-20211117123427-2067
	I1117 12:36:33.051848   19705 cli_runner.go:115] Run: docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067
	W1117 12:36:33.150557   19705 cli_runner.go:162] docker container inspect -f {{.Id}} default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:33.150676   19705 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:36:33.251135   19705 cli_runner.go:115] Run: docker network rm default-k8s-different-port-20211117123427-2067
	I1117 12:36:40.133839   19705 cli_runner.go:168] Completed: docker network rm default-k8s-different-port-20211117123427-2067: (6.882701969s)
	W1117 12:36:40.134356   19705 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:36:40.134363   19705 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:36:41.138177   19705 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:36:41.185733   19705 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:36:41.185827   19705 start.go:160] libmachine.API.Create for "default-k8s-different-port-20211117123427-2067" (driver="docker")
	I1117 12:36:41.185854   19705 client.go:168] LocalClient.Create starting
	I1117 12:36:41.185965   19705 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:36:41.186021   19705 main.go:130] libmachine: Decoding PEM data...
	I1117 12:36:41.186042   19705 main.go:130] libmachine: Parsing certificate...
	I1117 12:36:41.186090   19705 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:36:41.186121   19705 main.go:130] libmachine: Decoding PEM data...
	I1117 12:36:41.186137   19705 main.go:130] libmachine: Parsing certificate...
	I1117 12:36:41.186757   19705 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:36:41.297556   19705 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117123427-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:36:41.297680   19705 network_create.go:254] running [docker network inspect default-k8s-different-port-20211117123427-2067] to gather additional debugging logs...
	I1117 12:36:41.297699   19705 cli_runner.go:115] Run: docker network inspect default-k8s-different-port-20211117123427-2067
	W1117 12:36:41.399877   19705 cli_runner.go:162] docker network inspect default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:41.399903   19705 network_create.go:257] error running [docker network inspect default-k8s-different-port-20211117123427-2067]: docker network inspect default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20211117123427-2067
	I1117 12:36:41.399918   19705 network_create.go:259] output of [docker network inspect default-k8s-different-port-20211117123427-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20211117123427-2067
	
	** /stderr **
	I1117 12:36:41.400022   19705 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:36:41.503912   19705 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005322d8] amended:false}} dirty:map[] misses:0}
	I1117 12:36:41.503943   19705 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:36:41.504111   19705 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005322d8] amended:true}} dirty:map[192.168.49.0:0xc0005322d8 192.168.58.0:0xc000274540] misses:0}
	I1117 12:36:41.504123   19705 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:36:41.504130   19705 network_create.go:106] attempt to create docker network default-k8s-different-port-20211117123427-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:36:41.504208   19705 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117123427-2067
	W1117 12:36:41.604775   19705 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117123427-2067 returned with exit code 1
	W1117 12:36:41.604824   19705 network_create.go:98] failed to create docker network default-k8s-different-port-20211117123427-2067 192.168.58.0/24, will retry: subnet is taken
	I1117 12:36:41.605088   19705 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005322d8] amended:true}} dirty:map[192.168.49.0:0xc0005322d8 192.168.58.0:0xc000274540] misses:1}
	I1117 12:36:41.605108   19705 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:36:41.605375   19705 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005322d8] amended:true}} dirty:map[192.168.49.0:0xc0005322d8 192.168.58.0:0xc000274540 192.168.67.0:0xc0006d63c0] misses:1}
	I1117 12:36:41.605395   19705 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:36:41.605402   19705 network_create.go:106] attempt to create docker network default-k8s-different-port-20211117123427-2067 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 12:36:41.605519   19705 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117123427-2067
	I1117 12:36:46.833494   19705 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211117123427-2067: (5.227982993s)
	I1117 12:36:46.833522   19705 network_create.go:90] docker network default-k8s-different-port-20211117123427-2067 192.168.67.0/24 created
	I1117 12:36:46.833547   19705 kic.go:106] calculated static IP "192.168.67.2" for the "default-k8s-different-port-20211117123427-2067" container
	I1117 12:36:46.833651   19705 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:36:46.936125   19705 cli_runner.go:115] Run: docker volume create default-k8s-different-port-20211117123427-2067 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117123427-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:36:47.059541   19705 oci.go:102] Successfully created a docker volume default-k8s-different-port-20211117123427-2067
	I1117 12:36:47.059720   19705 cli_runner.go:115] Run: docker run --rm --name default-k8s-different-port-20211117123427-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211117123427-2067 --entrypoint /usr/bin/test -v default-k8s-different-port-20211117123427-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:36:47.545746   19705 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20211117123427-2067
	E1117 12:36:47.545796   19705 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:36:47.545809   19705 client.go:171] LocalClient.Create took 6.360004547s
	I1117 12:36:47.545811   19705 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:36:47.545837   19705 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:36:47.545951   19705 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211117123427-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:36:49.545983   19705 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:36:49.546081   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:49.675676   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:49.675780   19705 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:49.881136   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:50.008688   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:50.008799   19705 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:50.314653   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:50.432485   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:50.432606   19705 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:51.137601   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:51.266529   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	W1117 12:36:51.266613   19705 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	W1117 12:36:51.266628   19705 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:51.266646   19705 start.go:129] duration metric: createHost completed in 10.12852395s
	I1117 12:36:51.266712   19705 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:36:51.266781   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:51.387214   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:51.387295   19705 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:51.731318   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:51.857289   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:51.857413   19705 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:52.315006   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:52.436759   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	I1117 12:36:52.436873   19705 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:53.014720   19705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067
	W1117 12:36:53.125852   19705 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067 returned with exit code 1
	W1117 12:36:53.125934   19705 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	W1117 12:36:53.125950   19705 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-different-port-20211117123427-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211117123427-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	I1117 12:36:53.125969   19705 fix.go:57] fixHost completed within 37.394064471s
	I1117 12:36:53.125977   19705 start.go:80] releasing machines lock for "default-k8s-different-port-20211117123427-2067", held for 37.394110153s
	W1117 12:36:53.126136   19705 out.go:241] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20211117123427-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20211117123427-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:36:53.201657   19705 out.go:176] 
	W1117 12:36:53.201816   19705 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:36:53.201834   19705 out.go:241] * 
	* 
	W1117 12:36:53.202492   19705 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:36:53.377035   19705 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117123427-2067 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "15d6150b6e89d93baafe84011e652df83d5affb8a4012fa2403722c812077d7a",
	        "Created": "2021-11-17T20:36:41.720799007Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (154.183053ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:36:53.677865   20389 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (76.71s)
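The failure above reduces to one repeated pattern: every attempt to reach the node goes through `docker container inspect` with a Go template that pulls the host port mapped to 22/tcp, and each attempt is retried after a growing delay because the container was never created. The Go snippet below is a minimal, illustrative sketch of that lookup-and-retry shape only; the helper name sshHostPort and the fixed delay list are assumptions for illustration, not minikube's actual implementation, and it assumes a docker CLI on PATH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort shells out to `docker container inspect` with the same Go template
// seen in the log to resolve the host port mapped to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Retry with growing delays, roughly mirroring the retry.go lines above.
	delays := []time.Duration{200 * time.Millisecond, 300 * time.Millisecond, 700 * time.Millisecond}
	for _, d := range delays {
		if port, err := sshHostPort("default-k8s-different-port-20211117123427-2067"); err == nil {
			fmt.Println("ssh host port:", port)
			return
		}
		time.Sleep(d)
	}
	fmt.Println("container never became inspectable")
}

Against a container that does not exist, the inspect command exits non-zero on every attempt, so the loop exhausts its delays, which is why fixHost eventually gives up and tries to recreate the host.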

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (15.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20211117123459-2067 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p newest-cni-20211117123459-2067 --alsologtostderr -v=3: exit status 82 (14.781452599s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-20211117123459-2067"  ...
	* Stopping node "newest-cni-20211117123459-2067"  ...
	* Stopping node "newest-cni-20211117123459-2067"  ...
	* Stopping node "newest-cni-20211117123459-2067"  ...
	* Stopping node "newest-cni-20211117123459-2067"  ...
	* Stopping node "newest-cni-20211117123459-2067"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:35:49.762415   19875 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:35:49.762619   19875 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:35:49.762624   19875 out.go:310] Setting ErrFile to fd 2...
	I1117 12:35:49.762627   19875 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:35:49.762692   19875 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:35:49.762853   19875 out.go:304] Setting JSON to false
	I1117 12:35:49.762986   19875 mustload.go:65] Loading cluster: newest-cni-20211117123459-2067
	I1117 12:35:49.763205   19875 config.go:176] Loaded profile config "newest-cni-20211117123459-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 12:35:49.763252   19875 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/newest-cni-20211117123459-2067/config.json ...
	I1117 12:35:49.763552   19875 mustload.go:65] Loading cluster: newest-cni-20211117123459-2067
	I1117 12:35:49.763637   19875 config.go:176] Loaded profile config "newest-cni-20211117123459-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 12:35:49.763671   19875 stop.go:39] StopHost: newest-cni-20211117123459-2067
	I1117 12:35:49.818850   19875 out.go:176] * Stopping node "newest-cni-20211117123459-2067"  ...
	I1117 12:35:49.819101   19875 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:49.938471   19875 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:49.938542   19875 stop.go:75] unable to get state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	W1117 12:35:49.938566   19875 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:49.938596   19875 retry.go:31] will retry after 1.104660288s: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:51.048729   19875 stop.go:39] StopHost: newest-cni-20211117123459-2067
	I1117 12:35:51.088135   19875 out.go:176] * Stopping node "newest-cni-20211117123459-2067"  ...
	I1117 12:35:51.089146   19875 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:51.201723   19875 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:51.201763   19875 stop.go:75] unable to get state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	W1117 12:35:51.201791   19875 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:51.201810   19875 retry.go:31] will retry after 2.160763633s: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:53.365788   19875 stop.go:39] StopHost: newest-cni-20211117123459-2067
	I1117 12:35:53.393181   19875 out.go:176] * Stopping node "newest-cni-20211117123459-2067"  ...
	I1117 12:35:53.393439   19875 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:53.497657   19875 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:53.497699   19875 stop.go:75] unable to get state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	W1117 12:35:53.497711   19875 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:53.497738   19875 retry.go:31] will retry after 2.62026012s: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:56.119888   19875 stop.go:39] StopHost: newest-cni-20211117123459-2067
	I1117 12:35:56.147125   19875 out.go:176] * Stopping node "newest-cni-20211117123459-2067"  ...
	I1117 12:35:56.147391   19875 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:56.248306   19875 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:56.248347   19875 stop.go:75] unable to get state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	W1117 12:35:56.248357   19875 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:56.248375   19875 retry.go:31] will retry after 3.164785382s: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:59.414416   19875 stop.go:39] StopHost: newest-cni-20211117123459-2067
	I1117 12:35:59.441924   19875 out.go:176] * Stopping node "newest-cni-20211117123459-2067"  ...
	I1117 12:35:59.442224   19875 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:35:59.547723   19875 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:35:59.547761   19875 stop.go:75] unable to get state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	W1117 12:35:59.547770   19875 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:35:59.547788   19875 retry.go:31] will retry after 4.680977329s: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:04.232493   19875 stop.go:39] StopHost: newest-cni-20211117123459-2067
	I1117 12:36:04.259626   19875 out.go:176] * Stopping node "newest-cni-20211117123459-2067"  ...
	I1117 12:36:04.259736   19875 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:04.376500   19875 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:36:04.376536   19875 stop.go:75] unable to get state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	W1117 12:36:04.376547   19875 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:04.403328   19875 out.go:176] 
	W1117 12:36:04.403426   19875 out.go:241] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20211117123459-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-20211117123459-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	W1117 12:36:04.403438   19875 out.go:241] * 
	* 
	W1117 12:36:04.406343   19875 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:36:04.482144   19875 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p newest-cni-20211117123459-2067 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117123459-2067
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117123459-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-20211117123459-2067",
	        "Id": "b40e038bf6ea0baf7b528ac1f0fa01dd84cf53da3a4dbfa17123d23fc9c12340",
	        "Created": "2021-11-17T20:35:38.194681073Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067: exit status 7 (146.38611ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:36:04.753245   19947 status.go:247] status error: host: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117123459-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (15.03s)
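The post-mortem helpers above lean on `minikube status --format={{.Host}}` exiting with code 7 and printing "Nonexistent" when the host container was never created, which the test treats as "may be ok" and uses to skip log retrieval. The following Go sketch shows that check under stated assumptions: the hostStatus helper and the hard-coded profile name are illustrative, and the binary path is simply the one used in the log.

package main

import (
	"fmt"
	"os/exec"
)

// hostStatus runs the same status command the post-mortem helpers use and
// returns its stdout together with the process exit code.
func hostStatus(profile string) (string, int) {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout only; stderr carries the docker inspect error
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	state, code := hostStatus("newest-cni-20211117123459-2067")
	// Exit status 7 with "Nonexistent" means the container was never created;
	// the test therefore skips log retrieval instead of failing the post-mortem.
	fmt.Printf("state=%q exit=%d\n", state, code)
}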

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067: exit status 7 (151.538124ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:36:04.904771   19952 status.go:247] status error: host: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067

** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20211117123459-2067 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117123459-2067
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117123459-2067:

-- stdout --
	[
	    {
	        "Name": "newest-cni-20211117123459-2067",
	        "Id": "b40e038bf6ea0baf7b528ac1f0fa01dd84cf53da3a4dbfa17123d23fc9c12340",
	        "Created": "2021-11-17T20:35:38.194681073Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067: exit status 7 (159.666688ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:36:05.419691   19978 status.go:247] status error: host: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117123459-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.67s)
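The assertion at start_stop_delete_test.go:226 expects the post-stop host state to be "Stopped"; this run reports "Nonexistent" with exit code 7 because no container carries the profile name. A sketch of that post-stop check expressed as shell, reconstructed from the commands logged above (an illustration, not the test's own helper code):

PROFILE=newest-cni-20211117123459-2067
STATE=$(out/minikube-darwin-amd64 status --format='{{.Host}}' -p "$PROFILE" -n "$PROFILE")   # exits 7 in this run
if [ "$STATE" != "Stopped" ]; then
  echo "expected post-stop host status to be \"Stopped\" but got \"$STATE\"" >&2
fi
# the test then enables the dashboard addon against the saved profile:
out/minikube-darwin-amd64 addons enable dashboard -p "$PROFILE" --images=MetricsScraper=k8s.gcr.io/echoserver:1.4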

TestStartStop/group/newest-cni/serial/SecondStart (72.48s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20211117123459-2067 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-20211117123459-2067 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 80 (1m12.176525614s)

-- stdout --
	* [newest-cni-20211117123459-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node newest-cni-20211117123459-2067 in cluster newest-cni-20211117123459-2067
	* Pulling base image ...
	* docker "newest-cni-20211117123459-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-20211117123459-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:36:05.467683   19984 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:36:05.467856   19984 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:36:05.467862   19984 out.go:310] Setting ErrFile to fd 2...
	I1117 12:36:05.467865   19984 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:36:05.467959   19984 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:36:05.468295   19984 out.go:304] Setting JSON to false
	I1117 12:36:05.493724   19984 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3940,"bootTime":1637177425,"procs":337,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:36:05.493830   19984 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:36:05.538134   19984 out.go:176] * [newest-cni-20211117123459-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:36:05.538296   19984 notify.go:174] Checking for updates...
	I1117 12:36:05.601415   19984 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:36:05.627981   19984 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:36:05.666075   19984 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:36:05.691978   19984 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:36:05.692426   19984 config.go:176] Loaded profile config "newest-cni-20211117123459-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 12:36:05.692752   19984 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:36:05.799249   19984 docker.go:132] docker version: linux-20.10.5
	I1117 12:36:05.799358   19984 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:36:05.967992   19984 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:64 SystemTime:2021-11-17 20:36:05.921001307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:36:05.996561   19984 out.go:176] * Using the docker driver based on existing profile
	I1117 12:36:05.996621   19984 start.go:280] selected driver: docker
	I1117 12:36:05.996634   19984 start.go:775] validating driver "docker" against &{Name:newest-cni-20211117123459-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:newest-cni-20211117123459-2067 Namespace:default APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s Sc
heduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:36:05.996797   19984 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:36:06.000053   19984 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:36:06.185571   19984 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:58 SystemTime:2021-11-17 20:36:06.131754521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:36:06.185784   19984 start_flags.go:777] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1117 12:36:06.185808   19984 cni.go:93] Creating CNI manager for ""
	I1117 12:36:06.185817   19984 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:36:06.185831   19984 start_flags.go:282] config:
	{Name:newest-cni-20211117123459-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:newest-cni-20211117123459-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:fal
se ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:36:06.212874   19984 out.go:176] * Starting control plane node newest-cni-20211117123459-2067 in cluster newest-cni-20211117123459-2067
	I1117 12:36:06.212922   19984 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:36:06.238483   19984 out.go:176] * Pulling base image ...
	I1117 12:36:06.238524   19984 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:36:06.238547   19984 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:36:06.238585   19984 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	I1117 12:36:06.238604   19984 cache.go:57] Caching tarball of preloaded images
	I1117 12:36:06.238732   19984 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:36:06.238743   19984 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.4-rc.0 on docker
	I1117 12:36:06.239378   19984 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/newest-cni-20211117123459-2067/config.json ...
	I1117 12:36:06.377282   19984 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:36:06.377296   19984 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:36:06.377310   19984 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:36:06.377398   19984 start.go:313] acquiring machines lock for newest-cni-20211117123459-2067: {Name:mk8c536102b388ea9752e9ca8e2ac2f69703a931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:36:06.377566   19984 start.go:317] acquired machines lock for "newest-cni-20211117123459-2067" in 149.359µs
	I1117 12:36:06.377626   19984 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:36:06.377636   19984 fix.go:55] fixHost starting: 
	I1117 12:36:06.377964   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:06.498563   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:06.498651   19984 fix.go:108] recreateIfNeeded on newest-cni-20211117123459-2067: state= err=unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:06.498677   19984 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:36:06.525725   19984 out.go:176] * docker "newest-cni-20211117123459-2067" container is missing, will recreate.
	I1117 12:36:06.525764   19984 delete.go:124] DEMOLISHING newest-cni-20211117123459-2067 ...
	I1117 12:36:06.525920   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:06.653927   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:36:06.653972   19984 stop.go:75] unable to get state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:06.653987   19984 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:06.654434   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:06.773844   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:06.773899   19984 delete.go:82] Unable to get host status for newest-cni-20211117123459-2067, assuming it has already been deleted: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:06.774003   19984 cli_runner.go:115] Run: docker container inspect -f {{.Id}} newest-cni-20211117123459-2067
	W1117 12:36:06.895274   19984 cli_runner.go:162] docker container inspect -f {{.Id}} newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:36:06.895308   19984 kic.go:360] could not find the container newest-cni-20211117123459-2067 to remove it. will try anyways
	I1117 12:36:06.895404   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:07.018290   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:36:07.018332   19984 oci.go:83] error getting container status, will try to delete anyways: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:07.018438   19984 cli_runner.go:115] Run: docker exec --privileged -t newest-cni-20211117123459-2067 /bin/bash -c "sudo init 0"
	W1117 12:36:07.137162   19984 cli_runner.go:162] docker exec --privileged -t newest-cni-20211117123459-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:36:07.137188   19984 oci.go:656] error shutdown newest-cni-20211117123459-2067: docker exec --privileged -t newest-cni-20211117123459-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:08.139190   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:08.258187   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:08.258231   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:08.258241   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:08.258273   19984 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:08.810862   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:08.931859   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:08.931903   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:08.931922   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:08.931943   19984 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:10.015371   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:10.156152   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:10.156195   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:10.156204   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:10.156226   19984 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:11.466621   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:11.572354   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:11.572396   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:11.572405   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:11.572425   19984 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:13.163213   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:13.266018   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:13.266067   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:13.266078   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:13.266100   19984 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:15.615791   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:15.721585   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:15.721625   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:15.721641   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:15.721662   19984 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:20.232218   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:20.333653   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:20.333700   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:20.333711   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:20.333737   19984 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:23.562106   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:23.667369   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:23.667407   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:23.667416   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:23.667439   19984 oci.go:87] couldn't shut down newest-cni-20211117123459-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	 
	I1117 12:36:23.667522   19984 cli_runner.go:115] Run: docker rm -f -v newest-cni-20211117123459-2067
	I1117 12:36:23.769012   19984 cli_runner.go:115] Run: docker container inspect -f {{.Id}} newest-cni-20211117123459-2067
	W1117 12:36:23.872837   19984 cli_runner.go:162] docker container inspect -f {{.Id}} newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:36:23.872956   19984 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:36:23.974410   19984 cli_runner.go:115] Run: docker network rm newest-cni-20211117123459-2067
	I1117 12:36:27.627465   19984 cli_runner.go:168] Completed: docker network rm newest-cni-20211117123459-2067: (3.653044128s)
	W1117 12:36:27.628050   19984 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:36:27.628058   19984 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:36:28.633866   19984 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:36:28.661210   19984 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:36:28.661322   19984 start.go:160] libmachine.API.Create for "newest-cni-20211117123459-2067" (driver="docker")
	I1117 12:36:28.661347   19984 client.go:168] LocalClient.Create starting
	I1117 12:36:28.661443   19984 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:36:28.661484   19984 main.go:130] libmachine: Decoding PEM data...
	I1117 12:36:28.661503   19984 main.go:130] libmachine: Parsing certificate...
	I1117 12:36:28.661580   19984 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:36:28.681880   19984 main.go:130] libmachine: Decoding PEM data...
	I1117 12:36:28.681910   19984 main.go:130] libmachine: Parsing certificate...
	I1117 12:36:28.682989   19984 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:36:28.786518   19984 cli_runner.go:162] docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:36:28.786635   19984 network_create.go:254] running [docker network inspect newest-cni-20211117123459-2067] to gather additional debugging logs...
	I1117 12:36:28.786655   19984 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067
	W1117 12:36:28.886720   19984 cli_runner.go:162] docker network inspect newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:36:28.886744   19984 network_create.go:257] error running [docker network inspect newest-cni-20211117123459-2067]: docker network inspect newest-cni-20211117123459-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20211117123459-2067
	I1117 12:36:28.886758   19984 network_create.go:259] output of [docker network inspect newest-cni-20211117123459-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20211117123459-2067
	
	** /stderr **
	I1117 12:36:28.886854   19984 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:36:28.986668   19984 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000de6a0] misses:0}
	I1117 12:36:28.986711   19984 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:36:28.986725   19984 network_create.go:106] attempt to create docker network newest-cni-20211117123459-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:36:28.986800   19984 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067
	W1117 12:36:29.086636   19984 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067 returned with exit code 1
	W1117 12:36:29.086682   19984 network_create.go:98] failed to create docker network newest-cni-20211117123459-2067 192.168.49.0/24, will retry: subnet is taken
	I1117 12:36:29.086907   19984 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de6a0] amended:false}} dirty:map[] misses:0}
	I1117 12:36:29.086923   19984 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:36:29.087096   19984 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de6a0] amended:true}} dirty:map[192.168.49.0:0xc0000de6a0 192.168.58.0:0xc00000eaf0] misses:0}
	I1117 12:36:29.087108   19984 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:36:29.087114   19984 network_create.go:106] attempt to create docker network newest-cni-20211117123459-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:36:29.087203   19984 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067
	I1117 12:36:34.722599   19984 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067: (5.63539636s)
	I1117 12:36:34.722626   19984 network_create.go:90] docker network newest-cni-20211117123459-2067 192.168.58.0/24 created
	I1117 12:36:34.722652   19984 kic.go:106] calculated static IP "192.168.58.2" for the "newest-cni-20211117123459-2067" container
	I1117 12:36:34.722773   19984 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:36:34.820904   19984 cli_runner.go:115] Run: docker volume create newest-cni-20211117123459-2067 --label name.minikube.sigs.k8s.io=newest-cni-20211117123459-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:36:34.920821   19984 oci.go:102] Successfully created a docker volume newest-cni-20211117123459-2067
	I1117 12:36:34.920946   19984 cli_runner.go:115] Run: docker run --rm --name newest-cni-20211117123459-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20211117123459-2067 --entrypoint /usr/bin/test -v newest-cni-20211117123459-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:36:35.327937   19984 oci.go:106] Successfully prepared a docker volume newest-cni-20211117123459-2067
	E1117 12:36:35.327987   19984 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:36:35.327995   19984 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:36:35.328012   19984 client.go:171] LocalClient.Create took 6.666718676s
	I1117 12:36:35.328026   19984 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:36:35.328155   19984 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117123459-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:36:37.331298   19984 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:36:37.331388   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:36:37.456634   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:36:37.456725   19984 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:37.606276   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:36:37.747360   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:36:37.747443   19984 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:38.050630   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:36:38.170182   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:36:38.170260   19984 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:38.751989   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:36:38.879981   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	W1117 12:36:38.880093   19984 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	W1117 12:36:38.880107   19984 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:38.880118   19984 start.go:129] duration metric: createHost completed in 10.246322638s
	I1117 12:36:38.880196   19984 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:36:38.880253   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:36:39.001282   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:36:39.001379   19984 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:39.181232   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:36:39.331875   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:36:39.331958   19984 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:39.662561   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:36:39.785840   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:36:39.785937   19984 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:40.254065   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:36:40.370641   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	W1117 12:36:40.370740   19984 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	W1117 12:36:40.370772   19984 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:40.370783   19984 fix.go:57] fixHost completed within 33.993460241s
	I1117 12:36:40.370792   19984 start.go:80] releasing machines lock for "newest-cni-20211117123459-2067", held for 33.993527933s
	W1117 12:36:40.370809   19984 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:36:40.370961   19984 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:36:40.370970   19984 start.go:547] Will try again in 5 seconds ...
	I1117 12:36:41.496694   19984 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117123459-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.168568374s)
	I1117 12:36:41.496714   19984 kic.go:188] duration metric: took 6.168746 seconds to extract preloaded images to volume
	I1117 12:36:45.373383   19984 start.go:313] acquiring machines lock for newest-cni-20211117123459-2067: {Name:mk8c536102b388ea9752e9ca8e2ac2f69703a931 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:36:45.373526   19984 start.go:317] acquired machines lock for "newest-cni-20211117123459-2067" in 107.05µs
	I1117 12:36:45.373554   19984 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:36:45.373560   19984 fix.go:55] fixHost starting: 
	I1117 12:36:45.373874   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:45.489513   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:45.489568   19984 fix.go:108] recreateIfNeeded on newest-cni-20211117123459-2067: state= err=unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:45.489583   19984 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:36:45.515187   19984 out.go:176] * docker "newest-cni-20211117123459-2067" container is missing, will recreate.
	I1117 12:36:45.515199   19984 delete.go:124] DEMOLISHING newest-cni-20211117123459-2067 ...
	I1117 12:36:45.515315   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:45.614793   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:36:45.614834   19984 stop.go:75] unable to get state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:45.614850   19984 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:45.615255   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:45.716803   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:45.716859   19984 delete.go:82] Unable to get host status for newest-cni-20211117123459-2067, assuming it has already been deleted: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:45.716972   19984 cli_runner.go:115] Run: docker container inspect -f {{.Id}} newest-cni-20211117123459-2067
	W1117 12:36:45.818334   19984 cli_runner.go:162] docker container inspect -f {{.Id}} newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:36:45.818360   19984 kic.go:360] could not find the container newest-cni-20211117123459-2067 to remove it. will try anyways
	I1117 12:36:45.818455   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:45.920254   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:36:45.920293   19984 oci.go:83] error getting container status, will try to delete anyways: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:45.920387   19984 cli_runner.go:115] Run: docker exec --privileged -t newest-cni-20211117123459-2067 /bin/bash -c "sudo init 0"
	W1117 12:36:46.023206   19984 cli_runner.go:162] docker exec --privileged -t newest-cni-20211117123459-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:36:46.023231   19984 oci.go:656] error shutdown newest-cni-20211117123459-2067: docker exec --privileged -t newest-cni-20211117123459-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:47.031213   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:47.225225   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:47.225292   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:47.225320   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:47.225345   19984 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:47.618555   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:47.728572   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:47.728613   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:47.728622   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:47.728646   19984 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:48.331157   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:48.451798   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:48.451850   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:48.451863   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:48.451888   19984 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:49.781171   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:49.914940   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:49.915005   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:49.915019   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:49.915054   19984 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:51.132135   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:51.262171   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:51.262215   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:51.262227   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:51.262249   19984 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:53.042467   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:53.315781   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:53.315820   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:53.315829   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:53.315862   19984 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:56.584741   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:36:56.691596   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:56.691638   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:36:56.691648   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:36:56.691670   19984 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:37:02.794765   19984 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:37:02.895379   19984 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:02.895421   19984 oci.go:668] temporary error verifying shutdown: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:37:02.895431   19984 oci.go:670] temporary error: container newest-cni-20211117123459-2067 status is  but expect it to be exited
	I1117 12:37:02.895457   19984 oci.go:87] couldn't shut down newest-cni-20211117123459-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	 
	I1117 12:37:02.895540   19984 cli_runner.go:115] Run: docker rm -f -v newest-cni-20211117123459-2067
	I1117 12:37:02.998175   19984 cli_runner.go:115] Run: docker container inspect -f {{.Id}} newest-cni-20211117123459-2067
	W1117 12:37:03.100090   19984 cli_runner.go:162] docker container inspect -f {{.Id}} newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:37:03.100206   19984 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:37:03.201340   19984 cli_runner.go:162] docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:37:03.201446   19984 network_create.go:254] running [docker network inspect newest-cni-20211117123459-2067] to gather additional debugging logs...
	I1117 12:37:03.201463   19984 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067
	W1117 12:37:03.304435   19984 cli_runner.go:162] docker network inspect newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:37:03.304467   19984 network_create.go:257] error running [docker network inspect newest-cni-20211117123459-2067]: docker network inspect newest-cni-20211117123459-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20211117123459-2067
	I1117 12:37:03.304486   19984 network_create.go:259] output of [docker network inspect newest-cni-20211117123459-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20211117123459-2067
	
	** /stderr **
	W1117 12:37:03.305629   19984 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:37:03.305636   19984 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:37:04.314414   19984 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:37:04.340222   19984 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:37:04.340396   19984 start.go:160] libmachine.API.Create for "newest-cni-20211117123459-2067" (driver="docker")
	I1117 12:37:04.340434   19984 client.go:168] LocalClient.Create starting
	I1117 12:37:04.340612   19984 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:37:04.340692   19984 main.go:130] libmachine: Decoding PEM data...
	I1117 12:37:04.340717   19984 main.go:130] libmachine: Parsing certificate...
	I1117 12:37:04.340810   19984 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:37:04.340868   19984 main.go:130] libmachine: Decoding PEM data...
	I1117 12:37:04.340892   19984 main.go:130] libmachine: Parsing certificate...
	I1117 12:37:04.362332   19984 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:37:04.489314   19984 cli_runner.go:162] docker network inspect newest-cni-20211117123459-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:37:04.489461   19984 network_create.go:254] running [docker network inspect newest-cni-20211117123459-2067] to gather additional debugging logs...
	I1117 12:37:04.489480   19984 cli_runner.go:115] Run: docker network inspect newest-cni-20211117123459-2067
	W1117 12:37:04.601886   19984 cli_runner.go:162] docker network inspect newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:37:04.601914   19984 network_create.go:257] error running [docker network inspect newest-cni-20211117123459-2067]: docker network inspect newest-cni-20211117123459-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20211117123459-2067
	I1117 12:37:04.601928   19984 network_create.go:259] output of [docker network inspect newest-cni-20211117123459-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20211117123459-2067
	
	** /stderr **
	I1117 12:37:04.602011   19984 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:37:04.715020   19984 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de6a0] amended:true}} dirty:map[192.168.49.0:0xc0000de6a0 192.168.58.0:0xc00000eaf0] misses:0}
	I1117 12:37:04.715056   19984 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:37:04.715261   19984 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de6a0] amended:true}} dirty:map[192.168.49.0:0xc0000de6a0 192.168.58.0:0xc00000eaf0] misses:1}
	I1117 12:37:04.715274   19984 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:37:04.715450   19984 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000de6a0] amended:true}} dirty:map[192.168.49.0:0xc0000de6a0 192.168.58.0:0xc00000eaf0 192.168.67.0:0xc000656888] misses:1}
	I1117 12:37:04.715461   19984 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:37:04.715469   19984 network_create.go:106] attempt to create docker network newest-cni-20211117123459-2067 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 12:37:04.715541   19984 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067
	I1117 12:37:11.212580   19984 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20211117123459-2067: (6.497052469s)
	I1117 12:37:11.212605   19984 network_create.go:90] docker network newest-cni-20211117123459-2067 192.168.67.0/24 created
	I1117 12:37:11.212628   19984 kic.go:106] calculated static IP "192.168.67.2" for the "newest-cni-20211117123459-2067" container
	I1117 12:37:11.212754   19984 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:37:11.312386   19984 cli_runner.go:115] Run: docker volume create newest-cni-20211117123459-2067 --label name.minikube.sigs.k8s.io=newest-cni-20211117123459-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:37:11.413661   19984 oci.go:102] Successfully created a docker volume newest-cni-20211117123459-2067
	I1117 12:37:11.413784   19984 cli_runner.go:115] Run: docker run --rm --name newest-cni-20211117123459-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20211117123459-2067 --entrypoint /usr/bin/test -v newest-cni-20211117123459-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:37:11.824318   19984 oci.go:106] Successfully prepared a docker volume newest-cni-20211117123459-2067
	E1117 12:37:11.824370   19984 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:37:11.824379   19984 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 12:37:11.824380   19984 client.go:171] LocalClient.Create took 7.4840075s
	I1117 12:37:11.824396   19984 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:37:11.824514   19984 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20211117123459-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:37:13.831018   19984 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:37:13.831107   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:37:13.946546   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:37:13.946630   19984 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:37:14.147890   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:37:14.309156   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:37:14.309242   19984 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:37:14.614431   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:37:14.726581   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:37:14.726671   19984 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:37:15.431456   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:37:15.583914   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	W1117 12:37:15.584004   19984 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	W1117 12:37:15.584022   19984 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:37:15.584033   19984 start.go:129] duration metric: createHost completed in 11.269683524s
	I1117 12:37:15.585254   19984 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:37:15.585328   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:37:15.704265   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:37:15.704384   19984 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:37:16.047271   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:37:16.173101   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:37:16.173237   19984 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:37:16.631138   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:37:16.754718   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	I1117 12:37:16.754810   19984 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:37:17.335881   19984 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067
	W1117 12:37:17.445756   19984 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067 returned with exit code 1
	W1117 12:37:17.445851   19984 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	W1117 12:37:17.445869   19984 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-20211117123459-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211117123459-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	I1117 12:37:17.445881   19984 fix.go:57] fixHost completed within 32.072613502s
	I1117 12:37:17.445889   19984 start.go:80] releasing machines lock for "newest-cni-20211117123459-2067", held for 32.072645426s
	W1117 12:37:17.446044   19984 out.go:241] * Failed to start docker container. Running "minikube delete -p newest-cni-20211117123459-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p newest-cni-20211117123459-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:37:17.490382   19984 out.go:176] 
	W1117 12:37:17.490546   19984 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:37:17.490560   19984 out.go:241] * 
	* 
	W1117 12:37:17.491206   19984 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:37:17.595103   19984 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p newest-cni-20211117123459-2067 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117123459-2067
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117123459-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-20211117123459-2067",
	        "Id": "9d2b4bde1ba492adf4ad0cc49ac4b7a78745e163da9611208972465dee4cd04e",
	        "Created": "2021-11-17T20:37:04.833890946Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067: exit status 7 (165.96727ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:37:17.895664   20657 status.go:247] status error: host: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117123459-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (72.48s)
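The stderr above shows the shutdown-verification loop retrying docker container inspect with growing delays (roughly 0.39s up to about 6.1s) before concluding the container is already gone. The following is a minimal, self-contained Go sketch of that retry-with-backoff pattern; inspectState and waitExited are illustrative helpers, not minikube's actual cli_runner/retry code, and the doubling delay is a simplification of the jittered delays seen in the log.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// inspectState shells out the same way the cli_runner lines in the log do:
	// docker container inspect <name> --format={{.State.Status}}.
	func inspectState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitExited polls until the container reports "exited" or the attempts run
	// out, doubling the delay each time (the real log uses jittered delays).
	func waitExited(name string, attempts int) error {
		delay := 400 * time.Millisecond
		for i := 0; i < attempts; i++ {
			state, err := inspectState(name)
			if err == nil && state == "exited" {
				return nil
			}
			fmt.Printf("attempt %d: state=%q err=%v; retrying in %v\n", i+1, state, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("container %q never reached the exited state", name)
	}

	func main() {
		if err := waitExited("newest-cni-20211117123459-2067", 6); err != nil {
			fmt.Println(err)
		}
	}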

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20211117123427-2067" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "15d6150b6e89d93baafe84011e652df83d5affb8a4012fa2403722c812077d7a",
	        "Created": "2021-11-17T20:36:41.720799007Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (147.352109ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:36:53.933801   20398 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20211117123427-2067" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211117123427-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117123427-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (39.036231ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-different-port-20211117123427-2067" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20211117123427-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "15d6150b6e89d93baafe84011e652df83d5affb8a4012fa2403722c812077d7a",
	        "Created": "2021-11-17T20:36:41.720799007Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (144.98908ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:36:54.224462   20408 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20211117123427-2067 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20211117123427-2067 "sudo crictl images -o json": exit status 80 (243.128056ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20211117123427-2067 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:289: failed to decode images json unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:289: v1.22.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.3",
- 	"k8s.gcr.io/kube-proxy:v1.22.3",
- 	"k8s.gcr.io/kube-scheduler:v1.22.3",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "15d6150b6e89d93baafe84011e652df83d5affb8a4012fa2403722c812077d7a",
	        "Created": "2021-11-17T20:36:41.720799007Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (148.745571ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:36:54.723200   20422 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.50s)
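The VerifyKubernetesImages step runs "sudo crictl images -o json" inside the node, decodes the JSON, and diffs the repo tags against an expected list; here the command produced no output, hence the "unexpected end of JSON input" error and the all-missing diff above. Below is a hedged Go sketch of that check, assuming crictl's usual CRI-style JSON shape (an images array with repoTags fields); it is not the test's actual comparison code.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// crictlImages models the assumed shape of `crictl images -o json`:
	// {"images":[{"repoTags":["k8s.gcr.io/pause:3.5", ...]}, ...]}.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// missingImages reports which expected image references are absent from the
	// decoded output. Matching is exact here; the real test's comparison may differ.
	func missingImages(raw []byte, want []string) ([]string, error) {
		var got crictlImages
		if err := json.Unmarshal(raw, &got); err != nil {
			return nil, fmt.Errorf("failed to decode images json: %w", err)
		}
		have := map[string]bool{}
		for _, img := range got.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		var missing []string
		for _, w := range want {
			if !have[w] {
				missing = append(missing, w)
			}
		}
		return missing, nil
	}

	func main() {
		want := []string{"k8s.gcr.io/pause:3.5", "k8s.gcr.io/etcd:3.5.0-0"}
		// An empty body is what the failing run produced, so this reproduces
		// the "unexpected end of JSON input" error seen in the log.
		if _, err := missingImages(nil, want); err != nil {
			fmt.Println(err)
		}
	}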

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20211117123427-2067 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p default-k8s-different-port-20211117123427-2067 --alsologtostderr -v=1: exit status 80 (203.275506ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:36:54.763617   20427 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:36:54.764177   20427 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:36:54.764183   20427 out.go:310] Setting ErrFile to fd 2...
	I1117 12:36:54.764187   20427 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:36:54.764273   20427 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:36:54.764455   20427 out.go:304] Setting JSON to false
	I1117 12:36:54.764472   20427 mustload.go:65] Loading cluster: default-k8s-different-port-20211117123427-2067
	I1117 12:36:54.764708   20427 config.go:176] Loaded profile config "default-k8s-different-port-20211117123427-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:36:54.765063   20427 cli_runner.go:115] Run: docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}
	W1117 12:36:54.868313   20427 cli_runner.go:162] docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:36:54.895825   20427 out.go:176] 
	W1117 12:36:54.896037   20427 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067
	
	W1117 12:36:54.896055   20427 out.go:241] * 
	* 
	W1117 12:36:54.900507   20427 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:36:54.926022   20427 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p default-k8s-different-port-20211117123427-2067 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "15d6150b6e89d93baafe84011e652df83d5affb8a4012fa2403722c812077d7a",
	        "Created": "2021-11-17T20:36:41.720799007Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (149.751125ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:36:55.181663   20436 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117123427-2067
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20211117123427-2067:

-- stdout --
	[
	    {
	        "Name": "default-k8s-different-port-20211117123427-2067",
	        "Id": "15d6150b6e89d93baafe84011e652df83d5affb8a4012fa2403722c812077d7a",
	        "Created": "2021-11-17T20:36:41.720799007Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117123427-2067 -n default-k8s-different-port-20211117123427-2067: exit status 7 (145.643556ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 12:36:55.432718   20445 status.go:247] status error: host: state: unknown state "default-k8s-different-port-20211117123427-2067": docker container inspect default-k8s-different-port-20211117123427-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-different-port-20211117123427-2067

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117123427-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (0.71s)

TestStartStop/group/embed-certs/serial/FirstStart (53.71s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20211117123704-2067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-20211117123704-2067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3: exit status 80 (53.424089045s)

-- stdout --
	* [embed-certs-20211117123704-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node embed-certs-20211117123704-2067 in cluster embed-certs-20211117123704-2067
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20211117123704-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 12:37:04.968424   20560 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:37:04.968562   20560 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:37:04.968567   20560 out.go:310] Setting ErrFile to fd 2...
	I1117 12:37:04.968570   20560 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:37:04.968645   20560 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:37:04.968979   20560 out.go:304] Setting JSON to false
	I1117 12:37:04.996370   20560 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":3999,"bootTime":1637177425,"procs":324,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:37:04.996466   20560 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:37:05.022854   20560 out.go:176] * [embed-certs-20211117123704-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:37:05.023045   20560 notify.go:174] Checking for updates...
	I1117 12:37:05.070473   20560 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:37:05.097690   20560 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:37:05.123547   20560 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:37:05.149368   20560 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:37:05.149807   20560 config.go:176] Loaded profile config "multinode-20211117120800-2067-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:37:05.149903   20560 config.go:176] Loaded profile config "newest-cni-20211117123459-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 12:37:05.149938   20560 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:37:05.240933   20560 docker.go:132] docker version: linux-20.10.5
	I1117 12:37:05.241056   20560 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:37:05.396245   20560 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:49 SystemTime:2021-11-17 20:37:05.353084225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:37:05.443260   20560 out.go:176] * Using the docker driver based on user configuration
	I1117 12:37:05.443287   20560 start.go:280] selected driver: docker
	I1117 12:37:05.443296   20560 start.go:775] validating driver "docker" against <nil>
	I1117 12:37:05.443309   20560 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:37:05.445692   20560 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:37:05.599543   20560 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:49 SystemTime:2021-11-17 20:37:05.556723316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:37:05.599664   20560 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 12:37:05.599788   20560 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:37:05.599805   20560 cni.go:93] Creating CNI manager for ""
	I1117 12:37:05.599812   20560 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:37:05.599818   20560 start_flags.go:282] config:
	{Name:embed-certs-20211117123704-2067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:embed-certs-20211117123704-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:37:05.647366   20560 out.go:176] * Starting control plane node embed-certs-20211117123704-2067 in cluster embed-certs-20211117123704-2067
	I1117 12:37:05.647473   20560 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:37:05.673243   20560 out.go:176] * Pulling base image ...
	I1117 12:37:05.673355   20560 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:37:05.673374   20560 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:37:05.673444   20560 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:37:05.673472   20560 cache.go:57] Caching tarball of preloaded images
	I1117 12:37:05.673707   20560 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:37:05.673723   20560 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:37:05.674805   20560 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/embed-certs-20211117123704-2067/config.json ...
	I1117 12:37:05.674936   20560 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/embed-certs-20211117123704-2067/config.json: {Name:mkdbf1075733ac47c6b6456b495eec3e920fb63a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 12:37:05.790212   20560 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:37:05.790228   20560 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:37:05.790240   20560 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:37:05.790279   20560 start.go:313] acquiring machines lock for embed-certs-20211117123704-2067: {Name:mk8346b67e44e2a1d0260fdae772a9126f083f45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:37:05.790417   20560 start.go:317] acquired machines lock for "embed-certs-20211117123704-2067" in 125.711µs
	I1117 12:37:05.790444   20560 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20211117123704-2067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:embed-certs-20211117123704-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 12:37:05.790528   20560 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:37:05.838672   20560 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:37:05.839067   20560 start.go:160] libmachine.API.Create for "embed-certs-20211117123704-2067" (driver="docker")
	I1117 12:37:05.839119   20560 client.go:168] LocalClient.Create starting
	I1117 12:37:05.839281   20560 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:37:05.839369   20560 main.go:130] libmachine: Decoding PEM data...
	I1117 12:37:05.839400   20560 main.go:130] libmachine: Parsing certificate...
	I1117 12:37:05.839502   20560 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:37:05.839560   20560 main.go:130] libmachine: Decoding PEM data...
	I1117 12:37:05.839584   20560 main.go:130] libmachine: Parsing certificate...
	I1117 12:37:05.840375   20560 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:37:05.944706   20560 cli_runner.go:162] docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:37:05.944829   20560 network_create.go:254] running [docker network inspect embed-certs-20211117123704-2067] to gather additional debugging logs...
	I1117 12:37:05.944876   20560 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067
	W1117 12:37:06.045327   20560 cli_runner.go:162] docker network inspect embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:06.045357   20560 network_create.go:257] error running [docker network inspect embed-certs-20211117123704-2067]: docker network inspect embed-certs-20211117123704-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20211117123704-2067
	I1117 12:37:06.045369   20560 network_create.go:259] output of [docker network inspect embed-certs-20211117123704-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20211117123704-2067
	
	** /stderr **
	I1117 12:37:06.045476   20560 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:37:06.148307   20560 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000818720] misses:0}
	I1117 12:37:06.148344   20560 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:37:06.148363   20560 network_create.go:106] attempt to create docker network embed-certs-20211117123704-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:37:06.148452   20560 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117123704-2067
	I1117 12:37:18.900173   20560 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117123704-2067: (12.751790482s)
	I1117 12:37:18.900198   20560 network_create.go:90] docker network embed-certs-20211117123704-2067 192.168.49.0/24 created
	I1117 12:37:18.900218   20560 kic.go:106] calculated static IP "192.168.49.2" for the "embed-certs-20211117123704-2067" container
	I1117 12:37:18.900328   20560 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:37:19.013945   20560 cli_runner.go:115] Run: docker volume create embed-certs-20211117123704-2067 --label name.minikube.sigs.k8s.io=embed-certs-20211117123704-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:37:19.128902   20560 oci.go:102] Successfully created a docker volume embed-certs-20211117123704-2067
	I1117 12:37:19.129070   20560 cli_runner.go:115] Run: docker run --rm --name embed-certs-20211117123704-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211117123704-2067 --entrypoint /usr/bin/test -v embed-certs-20211117123704-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:37:19.655158   20560 oci.go:106] Successfully prepared a docker volume embed-certs-20211117123704-2067
	E1117 12:37:19.655280   20560 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:37:19.655317   20560 client.go:171] LocalClient.Create took 13.816311437s
	I1117 12:37:19.655369   20560 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:37:19.655387   20560 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:37:19.655546   20560 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117123704-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:37:21.665335   20560 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:37:21.665432   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:21.783538   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:21.783626   20560 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:22.064338   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:22.201544   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:22.201641   20560 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:22.742129   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:22.861568   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:22.861652   20560 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:23.520418   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:23.638855   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	W1117 12:37:23.638947   20560 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	W1117 12:37:23.638964   20560 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:23.638974   20560 start.go:129] duration metric: createHost completed in 17.84860314s
	I1117 12:37:23.638981   20560 start.go:80] releasing machines lock for "embed-certs-20211117123704-2067", held for 17.848718762s
	W1117 12:37:23.638998   20560 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:37:23.639575   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:23.757723   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:23.757775   20560 delete.go:82] Unable to get host status for embed-certs-20211117123704-2067, assuming it has already been deleted: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	W1117 12:37:23.757907   20560 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:37:23.757920   20560 start.go:547] Will try again in 5 seconds ...
	I1117 12:37:25.904878   20560 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117123704-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.249304078s)
	I1117 12:37:25.904897   20560 kic.go:188] duration metric: took 6.249570 seconds to extract preloaded images to volume
	I1117 12:37:28.764946   20560 start.go:313] acquiring machines lock for embed-certs-20211117123704-2067: {Name:mk8346b67e44e2a1d0260fdae772a9126f083f45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:37:28.765114   20560 start.go:317] acquired machines lock for "embed-certs-20211117123704-2067" in 140.011µs
	I1117 12:37:28.765151   20560 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:37:28.765162   20560 fix.go:55] fixHost starting: 
	I1117 12:37:28.765485   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:28.867657   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:28.867699   20560 fix.go:108] recreateIfNeeded on embed-certs-20211117123704-2067: state= err=unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:28.867716   20560 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:37:28.896437   20560 out.go:176] * docker "embed-certs-20211117123704-2067" container is missing, will recreate.
	I1117 12:37:28.896450   20560 delete.go:124] DEMOLISHING embed-certs-20211117123704-2067 ...
	I1117 12:37:28.896643   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:28.999001   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:37:28.999062   20560 stop.go:75] unable to get state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:28.999077   20560 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:28.999487   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:29.102649   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:29.102694   20560 delete.go:82] Unable to get host status for embed-certs-20211117123704-2067, assuming it has already been deleted: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:29.102782   20560 cli_runner.go:115] Run: docker container inspect -f {{.Id}} embed-certs-20211117123704-2067
	W1117 12:37:29.204564   20560 cli_runner.go:162] docker container inspect -f {{.Id}} embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:29.204593   20560 kic.go:360] could not find the container embed-certs-20211117123704-2067 to remove it. will try anyways
	I1117 12:37:29.204670   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:29.333257   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:37:29.333314   20560 oci.go:83] error getting container status, will try to delete anyways: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:29.333409   20560 cli_runner.go:115] Run: docker exec --privileged -t embed-certs-20211117123704-2067 /bin/bash -c "sudo init 0"
	W1117 12:37:29.437126   20560 cli_runner.go:162] docker exec --privileged -t embed-certs-20211117123704-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:37:29.437158   20560 oci.go:656] error shutdown embed-certs-20211117123704-2067: docker exec --privileged -t embed-certs-20211117123704-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:30.443649   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:30.558678   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:30.558728   20560 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:30.558741   20560 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:37:30.558764   20560 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:31.030967   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:31.134352   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:31.134395   20560 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:31.134405   20560 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:37:31.134427   20560 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:32.030967   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:32.135469   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:32.135511   20560 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:32.135522   20560 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:37:32.135545   20560 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:32.780920   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:32.886303   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:32.886352   20560 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:32.886360   20560 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:37:32.886386   20560 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:34.000188   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:34.101523   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:34.101562   20560 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:34.101570   20560 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:37:34.101602   20560 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:35.614419   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:35.717850   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:35.717891   20560 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:35.717899   20560 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:37:35.717920   20560 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:38.764422   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:38.867533   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:38.867572   20560 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:38.867581   20560 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:37:38.867603   20560 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:44.655896   20560 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:44.758896   20560 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:44.758949   20560 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:44.758959   20560 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:37:44.758989   20560 oci.go:87] couldn't shut down embed-certs-20211117123704-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	 
	I1117 12:37:44.759079   20560 cli_runner.go:115] Run: docker rm -f -v embed-certs-20211117123704-2067
	I1117 12:37:44.858694   20560 cli_runner.go:115] Run: docker container inspect -f {{.Id}} embed-certs-20211117123704-2067
	W1117 12:37:44.968220   20560 cli_runner.go:162] docker container inspect -f {{.Id}} embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:44.968324   20560 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:37:45.068601   20560 cli_runner.go:162] docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:37:45.068709   20560 network_create.go:254] running [docker network inspect embed-certs-20211117123704-2067] to gather additional debugging logs...
	I1117 12:37:45.068727   20560 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067
	W1117 12:37:45.166901   20560 cli_runner.go:162] docker network inspect embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:45.166934   20560 network_create.go:257] error running [docker network inspect embed-certs-20211117123704-2067]: docker network inspect embed-certs-20211117123704-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20211117123704-2067
	I1117 12:37:45.166947   20560 network_create.go:259] output of [docker network inspect embed-certs-20211117123704-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20211117123704-2067
	
	** /stderr **
	W1117 12:37:45.167219   20560 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:37:45.167226   20560 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:37:46.167432   20560 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:37:46.194785   20560 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:37:46.194961   20560 start.go:160] libmachine.API.Create for "embed-certs-20211117123704-2067" (driver="docker")
	I1117 12:37:46.195015   20560 client.go:168] LocalClient.Create starting
	I1117 12:37:46.195196   20560 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:37:46.195273   20560 main.go:130] libmachine: Decoding PEM data...
	I1117 12:37:46.195296   20560 main.go:130] libmachine: Parsing certificate...
	I1117 12:37:46.195383   20560 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:37:46.195435   20560 main.go:130] libmachine: Decoding PEM data...
	I1117 12:37:46.195452   20560 main.go:130] libmachine: Parsing certificate...
	I1117 12:37:46.196304   20560 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:37:46.297076   20560 cli_runner.go:162] docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:37:46.297180   20560 network_create.go:254] running [docker network inspect embed-certs-20211117123704-2067] to gather additional debugging logs...
	I1117 12:37:46.297198   20560 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067
	W1117 12:37:46.396650   20560 cli_runner.go:162] docker network inspect embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:46.396684   20560 network_create.go:257] error running [docker network inspect embed-certs-20211117123704-2067]: docker network inspect embed-certs-20211117123704-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20211117123704-2067
	I1117 12:37:46.396696   20560 network_create.go:259] output of [docker network inspect embed-certs-20211117123704-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20211117123704-2067
	
	** /stderr **
	I1117 12:37:46.396782   20560 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:37:46.498132   20560 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000818720] amended:false}} dirty:map[] misses:0}
	I1117 12:37:46.498166   20560 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:37:46.498325   20560 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000818720] amended:true}} dirty:map[192.168.49.0:0xc000818720 192.168.58.0:0xc00052c170] misses:0}
	I1117 12:37:46.498343   20560 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:37:46.498349   20560 network_create.go:106] attempt to create docker network embed-certs-20211117123704-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:37:46.498425   20560 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117123704-2067
	I1117 12:37:51.383890   20560 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117123704-2067: (4.885442469s)
	I1117 12:37:51.383919   20560 network_create.go:90] docker network embed-certs-20211117123704-2067 192.168.58.0/24 created
	I1117 12:37:51.383941   20560 kic.go:106] calculated static IP "192.168.58.2" for the "embed-certs-20211117123704-2067" container
	I1117 12:37:51.384056   20560 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:37:51.481887   20560 cli_runner.go:115] Run: docker volume create embed-certs-20211117123704-2067 --label name.minikube.sigs.k8s.io=embed-certs-20211117123704-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:37:51.580246   20560 oci.go:102] Successfully created a docker volume embed-certs-20211117123704-2067
	I1117 12:37:51.580394   20560 cli_runner.go:115] Run: docker run --rm --name embed-certs-20211117123704-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211117123704-2067 --entrypoint /usr/bin/test -v embed-certs-20211117123704-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:37:51.962039   20560 oci.go:106] Successfully prepared a docker volume embed-certs-20211117123704-2067
	E1117 12:37:51.962091   20560 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:37:51.962103   20560 client.go:171] LocalClient.Create took 5.767134488s
	I1117 12:37:51.962114   20560 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:37:51.962134   20560 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:37:51.962278   20560 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117123704-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:37:53.966250   20560 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:37:53.966379   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:54.101320   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:54.101422   20560 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:54.280640   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:54.401561   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:54.401644   20560 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:54.733061   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:54.848534   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:54.848623   20560 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:55.310608   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:55.425836   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	W1117 12:37:55.425919   20560 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	W1117 12:37:55.425944   20560 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:55.425953   20560 start.go:129] duration metric: createHost completed in 9.25858201s
	I1117 12:37:55.426039   20560 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:37:55.426103   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:55.542974   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:55.543060   20560 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:55.748825   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:55.880764   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:55.880849   20560 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:56.182291   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:56.306792   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:56.306918   20560 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:56.970657   20560 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:37:58.109114   20560 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:37:58.109134   20560 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: (1.138417467s)
	W1117 12:37:58.109213   20560 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	W1117 12:37:58.109226   20560 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:58.109235   20560 fix.go:57] fixHost completed within 29.344340947s
	I1117 12:37:58.109243   20560 start.go:80] releasing machines lock for "embed-certs-20211117123704-2067", held for 29.344385701s
	W1117 12:37:58.109393   20560 out.go:241] * Failed to start docker container. Running "minikube delete -p embed-certs-20211117123704-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p embed-certs-20211117123704-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:37:58.187399   20560 out.go:176] 
	W1117 12:37:58.187557   20560 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:37:58.187586   20560 out.go:241] * 
	* 
	W1117 12:37:58.188394   20560 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:37:58.312284   20560 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:173: failed starting minikube (first start). args "out/minikube-darwin-amd64 start -p embed-certs-20211117123704-2067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "71cfb5117baeab037d06b7e4cd2b2fd459cb8c3168d608a0ca0f967edc108fbd",
	        "Created": "2021-11-17T20:37:46.593314374Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (148.856104ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:37:58.632868   20948 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (53.71s)
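The FirstStart failure above originates at the kic driver's kernel-modules check ("error getting kernel modules path: Unable to locate kernel modules"): the docker network and volume are created, but the node container never is, so every subsequent lookup of its SSH host port fails with "No such container". The sketch below is a hypothetical Go illustration that reuses only the docker CLI invocation shown in the log and performs that same port-22 lookup once; minikube's own kic driver wraps it in the retry loop visible above.

	// port22.go: look up the host port mapped to 22/tcp on a kic node container.
	// Illustration only; minikube's kic driver implements this differently.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		// The same --format template that cli_runner.go logs above.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).CombinedOutput()
		if err != nil {
			// "Error: No such container: ..." lands here when the node was never created.
			return "", fmt.Errorf("inspect %s: %v: %s", container, err, strings.TrimSpace(string(out)))
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("embed-certs-20211117123704-2067")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("ssh host port:", port)
	}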

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20211117123459-2067 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p newest-cni-20211117123459-2067 "sudo crictl images -o json": exit status 80 (325.449466ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p newest-cni-20211117123459-2067 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:289: failed to decode images json: unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:289: v1.22.4-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-proxy:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-scheduler:v1.22.4-rc.0",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117123459-2067
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117123459-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-20211117123459-2067",
	        "Id": "9d2b4bde1ba492adf4ad0cc49ac4b7a78745e163da9611208972465dee4cd04e",
	        "Created": "2021-11-17T20:37:04.833890946Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067: exit status 7 (144.273188ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:37:18.471558   20671 status.go:247] status error: host: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117123459-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.57s)
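VerifyKubernetesImages failed before any image comparison could happen: "sudo crictl images -o json" exited with status 80 and produced no output, so decoding it reported "unexpected end of JSON input" and every expected image was counted as missing. The sketch below reproduces just the decode step; the images/repoTags field names are assumptions about typical crictl JSON output, not taken from the test source.

	// images.go: print the repo tags found in `crictl images -o json` output read
	// from stdin. Field names (images, repoTags) are assumed, not verified here.
	package main

	import (
		"encoding/json"
		"fmt"
		"io"
		"os"
	)

	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		data, err := io.ReadAll(os.Stdin)
		if err != nil {
			fmt.Fprintln(os.Stderr, "read:", err)
			os.Exit(1)
		}
		var parsed crictlImages
		// An empty payload, as in the failure above, yields "unexpected end of JSON input".
		if err := json.Unmarshal(data, &parsed); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range parsed.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag)
			}
		}
	}

Piping real output through it (sudo crictl images -o json | go run images.go) prints one repo tag per line; piping nothing reproduces the decode error seen in this test.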

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20211117123459-2067 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p newest-cni-20211117123459-2067 --alsologtostderr -v=1: exit status 80 (201.406757ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:37:18.513650   20676 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:37:18.513866   20676 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:37:18.513871   20676 out.go:310] Setting ErrFile to fd 2...
	I1117 12:37:18.513874   20676 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:37:18.513953   20676 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:37:18.514116   20676 out.go:304] Setting JSON to false
	I1117 12:37:18.514132   20676 mustload.go:65] Loading cluster: newest-cni-20211117123459-2067
	I1117 12:37:18.514352   20676 config.go:176] Loaded profile config "newest-cni-20211117123459-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4-rc.0
	I1117 12:37:18.514685   20676 cli_runner.go:115] Run: docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}
	W1117 12:37:18.617638   20676 cli_runner.go:162] docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:37:18.643907   20676 out.go:176] 
	W1117 12:37:18.644053   20676 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067
	
	W1117 12:37:18.644066   20676 out.go:241] * 
	* 
	W1117 12:37:18.646463   20676 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:37:18.672817   20676 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p newest-cni-20211117123459-2067 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117123459-2067
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117123459-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-20211117123459-2067",
	        "Id": "9d2b4bde1ba492adf4ad0cc49ac4b7a78745e163da9611208972465dee4cd04e",
	        "Created": "2021-11-17T20:37:04.833890946Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067: exit status 7 (151.155461ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:37:18.929937   20685 status.go:247] status error: host: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117123459-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117123459-2067
helpers_test.go:235: (dbg) docker inspect newest-cni-20211117123459-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-20211117123459-2067",
	        "Id": "9d2b4bde1ba492adf4ad0cc49ac4b7a78745e163da9611208972465dee4cd04e",
	        "Created": "2021-11-17T20:37:04.833890946Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117123459-2067 -n newest-cni-20211117123459-2067: exit status 7 (160.05406ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:37:19.203318   20700 status.go:247] status error: host: state: unknown state "newest-cni-20211117123459-2067": docker container inspect newest-cni-20211117123459-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-20211117123459-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-20211117123459-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20211117123704-2067 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context embed-certs-20211117123704-2067 create -f testdata/busybox.yaml: exit status 1 (40.98943ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-20211117123704-2067" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:181: kubectl --context embed-certs-20211117123704-2067 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "71cfb5117baeab037d06b7e4cd2b2fd459cb8c3168d608a0ca0f967edc108fbd",
	        "Created": "2021-11-17T20:37:46.593314374Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (146.479752ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:37:58.927411   20958 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "71cfb5117baeab037d06b7e4cd2b2fd459cb8c3168d608a0ca0f967edc108fbd",
	        "Created": "2021-11-17T20:37:46.593314374Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (146.982235ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:37:59.178964   20967 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.55s)
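DeployApp never reached a cluster: FirstStart exited before a kubeconfig entry was written, so kubectl reports that the context "embed-certs-20211117123704-2067" does not exist. A small, hypothetical pre-check along the following lines, using the standard "kubectl config get-contexts -o name" subcommand, would separate a missing context from a genuine create failure.

	// contexts.go: report whether a kubeconfig context exists before running
	// `kubectl --context <name> ...`. Hypothetical helper for illustration.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hasContext(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Fields(string(out)) {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasContext("embed-certs-20211117123704-2067")
		fmt.Println("context present:", ok, "err:", err)
	}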

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20211117123704-2067 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20211117123704-2067 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context embed-certs-20211117123704-2067 describe deploy/metrics-server -n kube-system: exit status 1 (39.291477ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-20211117123704-2067" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-20211117123704-2067 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "71cfb5117baeab037d06b7e4cd2b2fd459cb8c3168d608a0ca0f967edc108fbd",
	        "Created": "2021-11-17T20:37:46.593314374Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (144.216822ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:37:59.683904   20982 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (14.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20211117123704-2067 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p embed-certs-20211117123704-2067 --alsologtostderr -v=3: exit status 82 (14.746396702s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-20211117123704-2067"  ...
	* Stopping node "embed-certs-20211117123704-2067"  ...
	* Stopping node "embed-certs-20211117123704-2067"  ...
	* Stopping node "embed-certs-20211117123704-2067"  ...
	* Stopping node "embed-certs-20211117123704-2067"  ...
	* Stopping node "embed-certs-20211117123704-2067"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:37:59.725225   20987 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:37:59.725838   20987 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:37:59.725844   20987 out.go:310] Setting ErrFile to fd 2...
	I1117 12:37:59.725847   20987 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:37:59.725921   20987 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:37:59.726086   20987 out.go:304] Setting JSON to false
	I1117 12:37:59.726237   20987 mustload.go:65] Loading cluster: embed-certs-20211117123704-2067
	I1117 12:37:59.726468   20987 config.go:176] Loaded profile config "embed-certs-20211117123704-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:37:59.726545   20987 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/embed-certs-20211117123704-2067/config.json ...
	I1117 12:37:59.726872   20987 mustload.go:65] Loading cluster: embed-certs-20211117123704-2067
	I1117 12:37:59.726958   20987 config.go:176] Loaded profile config "embed-certs-20211117123704-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:37:59.726990   20987 stop.go:39] StopHost: embed-certs-20211117123704-2067
	I1117 12:37:59.753748   20987 out.go:176] * Stopping node "embed-certs-20211117123704-2067"  ...
	I1117 12:37:59.753991   20987 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:37:59.855622   20987 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:37:59.855690   20987 stop.go:75] unable to get state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	W1117 12:37:59.855710   20987 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:37:59.855731   20987 retry.go:31] will retry after 1.104660288s: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:00.964574   20987 stop.go:39] StopHost: embed-certs-20211117123704-2067
	I1117 12:38:00.992041   20987 out.go:176] * Stopping node "embed-certs-20211117123704-2067"  ...
	I1117 12:38:00.992270   20987 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:01.096292   20987 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:38:01.096331   20987 stop.go:75] unable to get state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	W1117 12:38:01.096341   20987 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:01.096359   20987 retry.go:31] will retry after 2.160763633s: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:03.257578   20987 stop.go:39] StopHost: embed-certs-20211117123704-2067
	I1117 12:38:03.285093   20987 out.go:176] * Stopping node "embed-certs-20211117123704-2067"  ...
	I1117 12:38:03.285421   20987 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:03.389119   20987 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:38:03.389164   20987 stop.go:75] unable to get state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	W1117 12:38:03.389175   20987 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:03.389194   20987 retry.go:31] will retry after 2.62026012s: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:06.010071   20987 stop.go:39] StopHost: embed-certs-20211117123704-2067
	I1117 12:38:06.058197   20987 out.go:176] * Stopping node "embed-certs-20211117123704-2067"  ...
	I1117 12:38:06.058487   20987 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:06.163763   20987 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:38:06.163806   20987 stop.go:75] unable to get state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	W1117 12:38:06.163821   20987 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:06.163841   20987 retry.go:31] will retry after 3.164785382s: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:09.335610   20987 stop.go:39] StopHost: embed-certs-20211117123704-2067
	I1117 12:38:09.362858   20987 out.go:176] * Stopping node "embed-certs-20211117123704-2067"  ...
	I1117 12:38:09.363123   20987 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:09.468431   20987 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:38:09.468478   20987 stop.go:75] unable to get state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	W1117 12:38:09.468491   20987 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:09.468513   20987 retry.go:31] will retry after 4.680977329s: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:14.157182   20987 stop.go:39] StopHost: embed-certs-20211117123704-2067
	I1117 12:38:14.184410   20987 out.go:176] * Stopping node "embed-certs-20211117123704-2067"  ...
	I1117 12:38:14.184604   20987 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:14.301550   20987 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:38:14.301586   20987 stop.go:75] unable to get state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	W1117 12:38:14.301595   20987 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:14.327334   20987 out.go:176] 
	W1117 12:38:14.327641   20987 out.go:241] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-20211117123704-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-20211117123704-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	W1117 12:38:14.327660   20987 out.go:241] * 
	* 
	W1117 12:38:14.336068   20987 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:38:14.410326   20987 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p embed-certs-20211117123704-2067 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "71cfb5117baeab037d06b7e4cd2b2fd459cb8c3168d608a0ca0f967edc108fbd",
	        "Created": "2021-11-17T20:37:46.593314374Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
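
Note on the docker inspect output above: the JSON is a Docker network object (Scope, IPAM, empty Containers), not a container. The container embed-certs-20211117123704-2067 was never created in this run, but the earlier start attempt left a network with the same name, so a bare `docker inspect` of that name resolves to the network. A minimal way to disambiguate the two, assuming Docker Desktop and the leftover network are still present on the test host:

	docker inspect --type container embed-certs-20211117123704-2067   # exits 1: No such container
	docker inspect --type network embed-certs-20211117123704-2067     # returns the leftover bridge network shown above
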
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (142.886378ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:38:14.677067   21016 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (14.99s)
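
Note on this failure: every state probe in the stop path exits 1 with "No such container", so minikube never observes a stopped host and aborts with GUEST_STOP_TIMEOUT (exit status 82). A rough sketch for reproducing the probe locally, assuming Docker is running and using this run's profile name:

	PROFILE=embed-certs-20211117123704-2067
	# Same template the stop path keeps retrying above; with the container
	# missing this exits 1 and prints "Error: No such container: ..." on stderr.
	docker container inspect "$PROFILE" --format '{{.State.Status}}'
	echo "inspect exit code: $?"
	# Any containers the profile did create would be listed here:
	docker ps -a --filter "name=$PROFILE"
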

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (142.63987ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:38:14.819899   21021 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20211117123704-2067 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "71cfb5117baeab037d06b7e4cd2b2fd459cb8c3168d608a0ca0f967edc108fbd",
	        "Created": "2021-11-17T20:37:46.593314374Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (144.430422ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:38:15.294926   21035 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.62s)
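
Note on this failure: the test expects the host to report "Stopped" after the preceding Stop step, but because the container never existed, status reports "Nonexistent" (exit 7) and the check fails before the dashboard addon result matters. When reproducing locally, one way to clear the half-deleted profile before retrying is a sketch like the following, assuming the same profile name as this run:

	# Removes the profile's config, any containers it created, and (in normal
	# operation) the leftover docker network seen in the post-mortem above.
	out/minikube-darwin-amd64 delete -p embed-certs-20211117123704-2067
	docker network ls --filter name=embed-certs-20211117123704-2067   # expected to be empty afterwards
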

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (72.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20211117123704-2067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-20211117123704-2067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3: exit status 80 (1m11.988393068s)

                                                
                                                
-- stdout --
	* [embed-certs-20211117123704-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-20211117123704-2067 in cluster embed-certs-20211117123704-2067
	* Pulling base image ...
	* docker "embed-certs-20211117123704-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-20211117123704-2067" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:38:15.338907   21040 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:38:15.339049   21040 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:38:15.339054   21040 out.go:310] Setting ErrFile to fd 2...
	I1117 12:38:15.339057   21040 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:38:15.339138   21040 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:38:15.339415   21040 out.go:304] Setting JSON to false
	I1117 12:38:15.366850   21040 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":4070,"bootTime":1637177425,"procs":320,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 12:38:15.366953   21040 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 12:38:15.393696   21040 out.go:176] * [embed-certs-20211117123704-2067] minikube v1.24.0 on Darwin 11.1
	I1117 12:38:15.393971   21040 notify.go:174] Checking for updates...
	I1117 12:38:15.441313   21040 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 12:38:15.472052   21040 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 12:38:15.498119   21040 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 12:38:15.524169   21040 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 12:38:15.524516   21040 config.go:176] Loaded profile config "embed-certs-20211117123704-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:38:15.524838   21040 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 12:38:15.613352   21040 docker.go:132] docker version: linux-20.10.5
	I1117 12:38:15.613475   21040 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:38:15.765303   21040 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:38:15.731399273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:38:15.791980   21040 out.go:176] * Using the docker driver based on existing profile
	I1117 12:38:15.792097   21040 start.go:280] selected driver: docker
	I1117 12:38:15.792109   21040 start.go:775] validating driver "docker" against &{Name:embed-certs-20211117123704-2067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:embed-certs-20211117123704-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:38:15.792214   21040 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 12:38:15.795876   21040 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 12:38:15.944951   21040 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 20:38:15.912897265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 12:38:15.945101   21040 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 12:38:15.945125   21040 cni.go:93] Creating CNI manager for ""
	I1117 12:38:15.945132   21040 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 12:38:15.945141   21040 start_flags.go:282] config:
	{Name:embed-certs-20211117123704-2067 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:embed-certs-20211117123704-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 12:38:15.971938   21040 out.go:176] * Starting control plane node embed-certs-20211117123704-2067 in cluster embed-certs-20211117123704-2067
	I1117 12:38:15.972030   21040 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 12:38:16.040715   21040 out.go:176] * Pulling base image ...
	I1117 12:38:16.040788   21040 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:38:16.040867   21040 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 12:38:16.040877   21040 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 12:38:16.040899   21040 cache.go:57] Caching tarball of preloaded images
	I1117 12:38:16.041194   21040 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 12:38:16.041222   21040 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 12:38:16.042538   21040 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/embed-certs-20211117123704-2067/config.json ...
	I1117 12:38:16.154867   21040 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 12:38:16.154879   21040 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 12:38:16.154891   21040 cache.go:206] Successfully downloaded all kic artifacts
	I1117 12:38:16.154927   21040 start.go:313] acquiring machines lock for embed-certs-20211117123704-2067: {Name:mk8346b67e44e2a1d0260fdae772a9126f083f45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:38:16.155006   21040 start.go:317] acquired machines lock for "embed-certs-20211117123704-2067" in 59.307µs
	I1117 12:38:16.155027   21040 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:38:16.155036   21040 fix.go:55] fixHost starting: 
	I1117 12:38:16.155280   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:16.255534   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:16.255610   21040 fix.go:108] recreateIfNeeded on embed-certs-20211117123704-2067: state= err=unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:16.255635   21040 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:38:16.282589   21040 out.go:176] * docker "embed-certs-20211117123704-2067" container is missing, will recreate.
	I1117 12:38:16.282612   21040 delete.go:124] DEMOLISHING embed-certs-20211117123704-2067 ...
	I1117 12:38:16.282749   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:16.385157   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:38:16.385215   21040 stop.go:75] unable to get state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:16.385236   21040 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:16.385670   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:16.485332   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:16.485374   21040 delete.go:82] Unable to get host status for embed-certs-20211117123704-2067, assuming it has already been deleted: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:16.485465   21040 cli_runner.go:115] Run: docker container inspect -f {{.Id}} embed-certs-20211117123704-2067
	W1117 12:38:16.586418   21040 cli_runner.go:162] docker container inspect -f {{.Id}} embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:38:16.586449   21040 kic.go:360] could not find the container embed-certs-20211117123704-2067 to remove it. will try anyways
	I1117 12:38:16.586550   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:16.687669   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:38:16.687719   21040 oci.go:83] error getting container status, will try to delete anyways: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:16.687800   21040 cli_runner.go:115] Run: docker exec --privileged -t embed-certs-20211117123704-2067 /bin/bash -c "sudo init 0"
	W1117 12:38:16.789885   21040 cli_runner.go:162] docker exec --privileged -t embed-certs-20211117123704-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:38:16.789913   21040 oci.go:656] error shutdown embed-certs-20211117123704-2067: docker exec --privileged -t embed-certs-20211117123704-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:17.800283   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:17.904359   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:17.904401   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:17.904410   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:17.904438   21040 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:18.458506   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:18.562885   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:18.562925   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:18.562942   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:18.562963   21040 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:19.646415   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:19.751953   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:19.751991   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:19.751998   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:19.752019   21040 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:21.062415   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:21.166968   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:21.167008   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:21.167017   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:21.167038   21040 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:22.759802   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:22.863184   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:22.863234   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:22.863244   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:22.863266   21040 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:25.204707   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:25.308607   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:25.330569   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:25.330588   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:25.330633   21040 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:29.837100   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:29.938779   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:29.938827   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:29.938835   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:29.938873   21040 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:33.160683   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:33.266193   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:33.266232   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:33.266242   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:33.266267   21040 oci.go:87] couldn't shut down embed-certs-20211117123704-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	 
	I1117 12:38:33.266342   21040 cli_runner.go:115] Run: docker rm -f -v embed-certs-20211117123704-2067
	I1117 12:38:33.366731   21040 cli_runner.go:115] Run: docker container inspect -f {{.Id}} embed-certs-20211117123704-2067
	W1117 12:38:33.467407   21040 cli_runner.go:162] docker container inspect -f {{.Id}} embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:38:33.467548   21040 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:38:33.566301   21040 cli_runner.go:115] Run: docker network rm embed-certs-20211117123704-2067
	I1117 12:38:37.026020   21040 cli_runner.go:168] Completed: docker network rm embed-certs-20211117123704-2067: (3.45970663s)
	W1117 12:38:37.026732   21040 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:38:37.026739   21040 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:38:38.026992   21040 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:38:38.054535   21040 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:38:38.054701   21040 start.go:160] libmachine.API.Create for "embed-certs-20211117123704-2067" (driver="docker")
	I1117 12:38:38.054746   21040 client.go:168] LocalClient.Create starting
	I1117 12:38:38.054972   21040 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:38:38.055068   21040 main.go:130] libmachine: Decoding PEM data...
	I1117 12:38:38.055098   21040 main.go:130] libmachine: Parsing certificate...
	I1117 12:38:38.055244   21040 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:38:38.055299   21040 main.go:130] libmachine: Decoding PEM data...
	I1117 12:38:38.055328   21040 main.go:130] libmachine: Parsing certificate...
	I1117 12:38:38.056262   21040 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:38:38.160046   21040 cli_runner.go:162] docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:38:38.160154   21040 network_create.go:254] running [docker network inspect embed-certs-20211117123704-2067] to gather additional debugging logs...
	I1117 12:38:38.160170   21040 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067
	W1117 12:38:38.259863   21040 cli_runner.go:162] docker network inspect embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:38:38.259892   21040 network_create.go:257] error running [docker network inspect embed-certs-20211117123704-2067]: docker network inspect embed-certs-20211117123704-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20211117123704-2067
	I1117 12:38:38.259909   21040 network_create.go:259] output of [docker network inspect embed-certs-20211117123704-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20211117123704-2067
	
	** /stderr **
	I1117 12:38:38.260009   21040 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:38:38.362087   21040 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001bcb30] misses:0}
	I1117 12:38:38.362126   21040 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:38:38.362140   21040 network_create.go:106] attempt to create docker network embed-certs-20211117123704-2067 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 12:38:38.362213   21040 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117123704-2067
	I1117 12:38:43.095441   21040 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117123704-2067: (4.733213626s)
	I1117 12:38:43.095465   21040 network_create.go:90] docker network embed-certs-20211117123704-2067 192.168.49.0/24 created
	I1117 12:38:43.095480   21040 kic.go:106] calculated static IP "192.168.49.2" for the "embed-certs-20211117123704-2067" container
	I1117 12:38:43.095601   21040 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:38:43.196704   21040 cli_runner.go:115] Run: docker volume create embed-certs-20211117123704-2067 --label name.minikube.sigs.k8s.io=embed-certs-20211117123704-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:38:43.297458   21040 oci.go:102] Successfully created a docker volume embed-certs-20211117123704-2067
	I1117 12:38:43.297596   21040 cli_runner.go:115] Run: docker run --rm --name embed-certs-20211117123704-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211117123704-2067 --entrypoint /usr/bin/test -v embed-certs-20211117123704-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:38:43.700657   21040 oci.go:106] Successfully prepared a docker volume embed-certs-20211117123704-2067
	E1117 12:38:43.700724   21040 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:38:43.700729   21040 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:38:43.700743   21040 client.go:171] LocalClient.Create took 5.646038645s
	I1117 12:38:43.700757   21040 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:38:43.700864   21040 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117123704-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:38:45.701124   21040 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:38:45.701234   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:38:45.842037   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:38:45.842177   21040 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:45.991636   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:38:46.115286   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:38:46.115378   21040 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:46.418913   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:38:46.537922   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:38:46.538071   21040 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:47.113106   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:38:47.234425   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	W1117 12:38:47.234534   21040 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	W1117 12:38:47.234560   21040 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:47.234584   21040 start.go:129] duration metric: createHost completed in 9.207631386s
	I1117 12:38:47.234676   21040 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:38:47.234763   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:38:47.346970   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:38:47.347045   21040 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:47.530092   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:38:47.654558   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:38:47.654654   21040 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:47.993523   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:38:48.118813   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:38:48.118893   21040 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:48.581597   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:38:48.703109   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	W1117 12:38:48.703192   21040 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	W1117 12:38:48.703216   21040 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:48.703232   21040 fix.go:57] fixHost completed within 32.548489113s
	I1117 12:38:48.703241   21040 start.go:80] releasing machines lock for "embed-certs-20211117123704-2067", held for 32.548524188s
	W1117 12:38:48.703258   21040 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:38:48.703388   21040 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:38:48.703396   21040 start.go:547] Will try again in 5 seconds ...
	I1117 12:38:49.979781   21040 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117123704-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.2789265s)
	I1117 12:38:49.979799   21040 kic.go:188] duration metric: took 6.279100 seconds to extract preloaded images to volume
	I1117 12:38:53.705573   21040 start.go:313] acquiring machines lock for embed-certs-20211117123704-2067: {Name:mk8346b67e44e2a1d0260fdae772a9126f083f45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 12:38:53.705750   21040 start.go:317] acquired machines lock for "embed-certs-20211117123704-2067" in 143.868µs
	I1117 12:38:53.705802   21040 start.go:93] Skipping create...Using existing machine configuration
	I1117 12:38:53.705811   21040 fix.go:55] fixHost starting: 
	I1117 12:38:53.706297   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:53.809785   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:53.809834   21040 fix.go:108] recreateIfNeeded on embed-certs-20211117123704-2067: state= err=unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:53.809847   21040 fix.go:113] machineExists: false. err=machine does not exist
	I1117 12:38:53.836823   21040 out.go:176] * docker "embed-certs-20211117123704-2067" container is missing, will recreate.
	I1117 12:38:53.836854   21040 delete.go:124] DEMOLISHING embed-certs-20211117123704-2067 ...
	I1117 12:38:53.837149   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:53.939492   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:38:53.939538   21040 stop.go:75] unable to get state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:53.939559   21040 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:53.939969   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:54.041375   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:54.041416   21040 delete.go:82] Unable to get host status for embed-certs-20211117123704-2067, assuming it has already been deleted: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:54.041511   21040 cli_runner.go:115] Run: docker container inspect -f {{.Id}} embed-certs-20211117123704-2067
	W1117 12:38:54.144725   21040 cli_runner.go:162] docker container inspect -f {{.Id}} embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:38:54.144751   21040 kic.go:360] could not find the container embed-certs-20211117123704-2067 to remove it. will try anyways
	I1117 12:38:54.144842   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:54.244857   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	W1117 12:38:54.244906   21040 oci.go:83] error getting container status, will try to delete anyways: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:54.245007   21040 cli_runner.go:115] Run: docker exec --privileged -t embed-certs-20211117123704-2067 /bin/bash -c "sudo init 0"
	W1117 12:38:54.347673   21040 cli_runner.go:162] docker exec --privileged -t embed-certs-20211117123704-2067 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 12:38:54.347701   21040 oci.go:656] error shutdown embed-certs-20211117123704-2067: docker exec --privileged -t embed-certs-20211117123704-2067 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:55.351813   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:55.456826   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:55.456874   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:55.456895   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:55.456921   21040 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:55.850520   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:55.953984   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:55.954034   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:55.954043   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:55.954070   21040 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:56.550554   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:56.654990   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:56.655037   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:56.655048   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:56.655083   21040 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:57.981996   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:58.084946   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:58.084996   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:58.085008   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:58.085029   21040 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:59.306788   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:38:59.410110   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:38:59.410159   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:38:59.410186   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:38:59.410214   21040 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:01.192865   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:39:01.293685   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:39:01.293730   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:01.293748   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:39:01.293770   21040 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:04.572401   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:39:04.675469   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:39:04.675514   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:04.675526   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:39:04.675547   21040 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:10.780144   21040 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:39:10.884312   21040 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:39:10.884355   21040 oci.go:668] temporary error verifying shutdown: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:10.884364   21040 oci.go:670] temporary error: container embed-certs-20211117123704-2067 status is  but expect it to be exited
	I1117 12:39:10.884390   21040 oci.go:87] couldn't shut down embed-certs-20211117123704-2067 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	 
	I1117 12:39:10.884472   21040 cli_runner.go:115] Run: docker rm -f -v embed-certs-20211117123704-2067
	I1117 12:39:10.987334   21040 cli_runner.go:115] Run: docker container inspect -f {{.Id}} embed-certs-20211117123704-2067
	W1117 12:39:11.086246   21040 cli_runner.go:162] docker container inspect -f {{.Id}} embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:39:11.086363   21040 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:39:11.187311   21040 cli_runner.go:115] Run: docker network rm embed-certs-20211117123704-2067
	I1117 12:39:14.640929   21040 cli_runner.go:168] Completed: docker network rm embed-certs-20211117123704-2067: (3.453597774s)
	W1117 12:39:14.641220   21040 delete.go:139] delete failed (probably ok) <nil>
	I1117 12:39:14.641226   21040 fix.go:120] Sleeping 1 second for extra luck!
	I1117 12:39:15.645729   21040 start.go:126] createHost starting for "" (driver="docker")
	I1117 12:39:15.672818   21040 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 12:39:15.673008   21040 start.go:160] libmachine.API.Create for "embed-certs-20211117123704-2067" (driver="docker")
	I1117 12:39:15.673046   21040 client.go:168] LocalClient.Create starting
	I1117 12:39:15.673255   21040 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/ca.pem
	I1117 12:39:15.673340   21040 main.go:130] libmachine: Decoding PEM data...
	I1117 12:39:15.673366   21040 main.go:130] libmachine: Parsing certificate...
	I1117 12:39:15.673460   21040 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/certs/cert.pem
	I1117 12:39:15.673515   21040 main.go:130] libmachine: Decoding PEM data...
	I1117 12:39:15.673539   21040 main.go:130] libmachine: Parsing certificate...
	I1117 12:39:15.674474   21040 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 12:39:15.779705   21040 cli_runner.go:162] docker network inspect embed-certs-20211117123704-2067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 12:39:15.779806   21040 network_create.go:254] running [docker network inspect embed-certs-20211117123704-2067] to gather additional debugging logs...
	I1117 12:39:15.779824   21040 cli_runner.go:115] Run: docker network inspect embed-certs-20211117123704-2067
	W1117 12:39:15.879712   21040 cli_runner.go:162] docker network inspect embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:39:15.879736   21040 network_create.go:257] error running [docker network inspect embed-certs-20211117123704-2067]: docker network inspect embed-certs-20211117123704-2067: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20211117123704-2067
	I1117 12:39:15.879748   21040 network_create.go:259] output of [docker network inspect embed-certs-20211117123704-2067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20211117123704-2067
	
	** /stderr **
	I1117 12:39:15.879842   21040 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 12:39:15.981931   21040 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001bcb30] amended:false}} dirty:map[] misses:0}
	I1117 12:39:15.981961   21040 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:39:15.982139   21040 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001bcb30] amended:true}} dirty:map[192.168.49.0:0xc0001bcb30 192.168.58.0:0xc0001bc9b8] misses:0}
	I1117 12:39:15.982151   21040 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 12:39:15.982157   21040 network_create.go:106] attempt to create docker network embed-certs-20211117123704-2067 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 12:39:15.982236   21040 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117123704-2067
	I1117 12:39:20.838069   21040 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211117123704-2067: (4.855817028s)
	I1117 12:39:20.838098   21040 network_create.go:90] docker network embed-certs-20211117123704-2067 192.168.58.0/24 created
	I1117 12:39:20.838125   21040 kic.go:106] calculated static IP "192.168.58.2" for the "embed-certs-20211117123704-2067" container
	I1117 12:39:20.838232   21040 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 12:39:20.938143   21040 cli_runner.go:115] Run: docker volume create embed-certs-20211117123704-2067 --label name.minikube.sigs.k8s.io=embed-certs-20211117123704-2067 --label created_by.minikube.sigs.k8s.io=true
	I1117 12:39:21.037243   21040 oci.go:102] Successfully created a docker volume embed-certs-20211117123704-2067
	I1117 12:39:21.037376   21040 cli_runner.go:115] Run: docker run --rm --name embed-certs-20211117123704-2067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211117123704-2067 --entrypoint /usr/bin/test -v embed-certs-20211117123704-2067:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 12:39:21.430939   21040 oci.go:106] Successfully prepared a docker volume embed-certs-20211117123704-2067
	E1117 12:39:21.430989   21040 oci.go:173] error getting kernel modules path: Unable to locate kernel modules
	I1117 12:39:21.430990   21040 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 12:39:21.431000   21040 client.go:171] LocalClient.Create took 5.757998833s
	I1117 12:39:21.431014   21040 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 12:39:21.431116   21040 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211117123704-2067:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 12:39:23.437040   21040 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:39:23.437155   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:39:23.573925   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:39:23.574047   21040 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:23.779349   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:39:23.901742   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:39:23.901828   21040 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:24.201016   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:39:24.341303   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:39:24.341392   21040 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:25.049525   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:39:25.177937   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	W1117 12:39:25.178024   21040 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	W1117 12:39:25.178047   21040 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:25.178059   21040 start.go:129] duration metric: createHost completed in 9.532394903s
	I1117 12:39:25.178132   21040 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 12:39:25.178194   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:39:25.301746   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:39:25.301922   21040 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:25.648634   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:39:25.789241   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:39:25.789352   21040 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:26.245508   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:39:26.365737   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	I1117 12:39:26.365815   21040 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:26.946591   21040 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067
	W1117 12:39:27.047868   21040 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067 returned with exit code 1
	W1117 12:39:27.047949   21040 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	W1117 12:39:27.047967   21040 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-20211117123704-2067": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211117123704-2067: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	I1117 12:39:27.047975   21040 fix.go:57] fixHost completed within 33.34246881s
	I1117 12:39:27.047983   21040 start.go:80] releasing machines lock for "embed-certs-20211117123704-2067", held for 33.342522255s
	W1117 12:39:27.048134   21040 out.go:241] * Failed to start docker container. Running "minikube delete -p embed-certs-20211117123704-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p embed-certs-20211117123704-2067" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 12:39:27.161718   21040 out.go:176] 
	W1117 12:39:27.161891   21040 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 12:39:27.161908   21040 out.go:241] * 
	* 
	W1117 12:39:27.163024   21040 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:39:27.246614   21040 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p embed-certs-20211117123704-2067 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3": exit status 80
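
The retry.go entries in the log above show the shutdown check being re-run with a growing delay (391ms, 594ms, 1.2s, ... up to ~6s) before minikube gives up on the missing container. A minimal Go sketch of that kind of backoff loop around the same docker probe; the base delay, the doubling, and the attempt count here are illustrative and not minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerState runs the same probe the log repeats:
// docker container inspect NAME --format={{.State.Status}}
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("unknown state %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

// waitExited retries with a growing delay, roughly like the
// "will retry after ..." lines above (the real delays also carry jitter),
// until the container reports "exited" or the attempts run out.
func waitExited(name string, attempts int) error {
	delay := 400 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if st, err := containerState(name); err == nil && st == "exited" {
			return nil
		}
		time.Sleep(delay)
		delay *= 2 // grow the delay between attempts
	}
	return errors.New("couldn't verify container is exited")
}

func main() {
	if err := waitExited("embed-certs-20211117123704-2067", 6); err != nil {
		fmt.Println(err)
	}
}
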
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "f74f9063d9741fbac5a6c013017d8bc95ff279343aecba6195767dfc57e3476d",
	        "Created": "2021-11-17T20:39:16.099982694Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (148.394986ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:39:27.553362   21350 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (72.26s)
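
Every ssh-port lookup in the log above fails with "No such container" because the Go template is evaluated against a container that was never recreated. A small sketch that runs the same lookup but checks for the container first (shelling out from Go; the profile name is the one from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "embed-certs-20211117123704-2067"

	// Does the container exist at all? `docker container inspect` exits
	// non-zero with "No such container" otherwise, which is exactly the
	// stderr repeated throughout the log above.
	if err := exec.Command("docker", "container", "inspect", name).Run(); err != nil {
		fmt.Printf("container %s does not exist: %v\n", name, err)
		return
	}

	// Same Go template the log uses to find the host port mapped to 22/tcp.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
	if err != nil {
		fmt.Printf("port lookup failed: %v\n", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}
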

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20211117123704-2067" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "f74f9063d9741fbac5a6c013017d8bc95ff279343aecba6195767dfc57e3476d",
	        "Created": "2021-11-17T20:39:16.099982694Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (145.909501ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:39:27.804489   21359 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.25s)
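
The test fails before doing any work because the kubeconfig context for the profile no longer exists. One way to check for the context up front is to list context names via kubectl, as sketched below; the test itself builds a client config rather than shelling out, so this is only an illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "embed-certs-20211117123704-2067"

	// `kubectl config get-contexts -o name` prints one context name per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("could not list contexts:", err)
		return
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == want {
			fmt.Println("context exists:", want)
			return
		}
	}
	fmt.Printf("context %q does not exist\n", want)
}
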

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20211117123704-2067" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211117123704-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20211117123704-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (41.446793ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-20211117123704-2067" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20211117123704-2067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "f74f9063d9741fbac5a6c013017d8bc95ff279343aecba6195767dfc57e3476d",
	        "Created": "2021-11-17T20:39:16.099982694Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (148.036764ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:39:28.099144   21369 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.29s)
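
Here the test shells out to kubectl describe and then looks for an expected image string in the output. With a working cluster the same information can be pulled more directly with a jsonpath query; this sketch assumes the dashboard-metrics-scraper deployment exists in the kubernetes-dashboard namespace and only checks for the substring the failing assertion above says it expects:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "embed-certs-20211117123704-2067"
	want := "k8s.gcr.io/echoserver:1.4" // substring the test above says it expects

	// Pull just the container images of the deployment.
	out, err := exec.Command("kubectl", "--context", ctx,
		"-n", "kubernetes-dashboard", "get", "deploy/dashboard-metrics-scraper",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
	if err != nil {
		fmt.Println("query failed (context or deployment missing):", err)
		return
	}
	images := string(out)
	fmt.Println("deployment images:", images)
	fmt.Println("contains expected image:", strings.Contains(images, want))
}
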

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20211117123704-2067 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p embed-certs-20211117123704-2067 "sudo crictl images -o json": exit status 80 (207.083265ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p embed-certs-20211117123704-2067 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:289: failed to decode images json unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:289: v1.22.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.3",
- 	"k8s.gcr.io/kube-proxy:v1.22.3",
- 	"k8s.gcr.io/kube-scheduler:v1.22.3",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "f74f9063d9741fbac5a6c013017d8bc95ff279343aecba6195767dfc57e3476d",
	        "Created": "2021-11-17T20:39:16.099982694Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (146.592733ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:39:28.559145   21383 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)
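
The image comparison needs the JSON from `sudo crictl images -o json` inside the node (the test wraps it in minikube ssh); with empty output the decode fails with "unexpected end of JSON input" and every expected image is reported missing. A sketch of the decode-and-compare step, assuming crictl's JSON shape (an images array whose entries carry repoTags):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the relevant part of `crictl images -o json` output:
// {"images":[{"repoTags":["k8s.gcr.io/pause:3.5", ...], ...}, ...]}
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Inside the minikube node this would be: sudo crictl images -o json.
	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}

	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		// An empty or truncated output produces exactly the
		// "unexpected end of JSON input" error seen above.
		fmt.Println("failed to decode images json:", err)
		return
	}

	got := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			got[tag] = true
		}
	}
	for _, want := range []string{"k8s.gcr.io/pause:3.5", "k8s.gcr.io/etcd:3.5.0-0"} {
		fmt.Printf("%-40s present=%v\n", want, got[want])
	}
}
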

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20211117123704-2067 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p embed-certs-20211117123704-2067 --alsologtostderr -v=1: exit status 80 (207.235683ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 12:39:28.600227   21388 out.go:297] Setting OutFile to fd 1 ...
	I1117 12:39:28.600782   21388 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:39:28.600787   21388 out.go:310] Setting ErrFile to fd 2...
	I1117 12:39:28.600790   21388 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 12:39:28.600874   21388 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 12:39:28.601050   21388 out.go:304] Setting JSON to false
	I1117 12:39:28.601065   21388 mustload.go:65] Loading cluster: embed-certs-20211117123704-2067
	I1117 12:39:28.601300   21388 config.go:176] Loaded profile config "embed-certs-20211117123704-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 12:39:28.601645   21388 cli_runner.go:115] Run: docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}
	W1117 12:39:28.709515   21388 cli_runner.go:162] docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}} returned with exit code 1
	I1117 12:39:28.736671   21388 out.go:176] 
	W1117 12:39:28.736755   21388 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067
	
	W1117 12:39:28.736763   21388 out.go:241] * 
	* 
	W1117 12:39:28.739827   21388 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 12:39:28.766660   21388 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p embed-certs-20211117123704-2067 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "f74f9063d9741fbac5a6c013017d8bc95ff279343aecba6195767dfc57e3476d",
	        "Created": "2021-11-17T20:39:16.099982694Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (145.697679ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:39:29.017742   21397 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117123704-2067
helpers_test.go:235: (dbg) docker inspect embed-certs-20211117123704-2067:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-20211117123704-2067",
	        "Id": "f74f9063d9741fbac5a6c013017d8bc95ff279343aecba6195767dfc57e3476d",
	        "Created": "2021-11-17T20:39:16.099982694Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117123704-2067 -n embed-certs-20211117123704-2067: exit status 7 (158.728432ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 12:39:29.286688   21406 status.go:247] status error: host: state: unknown state "embed-certs-20211117123704-2067": docker container inspect embed-certs-20211117123704-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-20211117123704-2067

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-20211117123704-2067" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.73s)
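
For reference, the post-mortem sequence above (a docker inspect of the profile followed by a host-state probe that tolerates exit status 7) can be replayed outside the test harness. The sketch below is illustrative only, not the helpers_test.go implementation; it assumes docker and a minikube binary are on PATH (the harness uses out/minikube-darwin-amd64) and reuses the profile name from the log above.

	// postmortem.go - illustrative sketch: replay the two post-mortem probes
	// used above for a given profile (not part of the test suite).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes a command and returns its combined output and exit code.
	func run(name string, args ...string) (string, int) {
		out, err := exec.Command(name, args...).CombinedOutput()
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode()
		} else if err != nil {
			code = -1
		}
		return string(out), code
	}

	func main() {
		profile := "embed-certs-20211117123704-2067" // profile name taken from the log above

		// 1) docker inspect: once the container is gone, this only shows the
		//    leftover minikube bridge network, as in the dump above.
		inspect, _ := run("docker", "inspect", profile)
		fmt.Println(inspect)

		// 2) minikube status: exit status 7 with host "Nonexistent" is what the
		//    harness reports as "may be ok" before skipping log retrieval.
		host, code := run("minikube", "status", "--format={{.Host}}", "-p", profile, "-n", profile)
		fmt.Printf("host=%q exit=%d\n", host, code)
		if code == 7 {
			fmt.Fprintln(os.Stderr, "host not running; skipping log retrieval")
		}
	}
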

                                                
                                    

Test pass (63/236)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 20.48
7 TestDownloadOnly/v1.14.0/kubectl 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.28
10 TestDownloadOnly/v1.22.3/json-events 4.81
11 TestDownloadOnly/v1.22.3/preload-exists 0
14 TestDownloadOnly/v1.22.3/kubectl 0
15 TestDownloadOnly/v1.22.3/LogsDuration 0.28
17 TestDownloadOnly/v1.22.4-rc.0/json-events 4.78
18 TestDownloadOnly/v1.22.4-rc.0/preload-exists 0
21 TestDownloadOnly/v1.22.4-rc.0/kubectl 0
22 TestDownloadOnly/v1.22.4-rc.0/LogsDuration 0.27
23 TestDownloadOnly/DeleteAll 1.03
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.59
25 TestDownloadOnlyKic 8.78
35 TestHyperKitDriverInstallOrUpdate 6.05
39 TestErrorSpam/start 2.36
40 TestErrorSpam/status 0.42
41 TestErrorSpam/pause 0.58
42 TestErrorSpam/unpause 0.76
43 TestErrorSpam/stop 44.04
46 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/CacheCmd/cache/add_local 1.68
68 TestFunctional/parallel/ConfigCmd 0.49
70 TestFunctional/parallel/DryRun 1.26
71 TestFunctional/parallel/InternationalLanguage 0.59
76 TestFunctional/parallel/AddonsCmd 0.27
91 TestFunctional/parallel/Version/short 0.09
95 TestFunctional/parallel/ImageCommands/Setup 2.06
101 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
102 TestFunctional/parallel/ProfileCmd/profile_list 0.36
103 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
105 TestFunctional/parallel/ImageCommands/ImageRemove 0.38
108 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
112 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
118 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
119 TestFunctional/delete_addon-resizer_images 0.2
120 TestFunctional/delete_my-image_image 0.1
121 TestFunctional/delete_minikube_cached_images 0.1
127 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.21
140 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
141 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
146 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
154 TestErrorJSONOutput 0.7
157 TestKicCustomNetwork/use_default_bridge_network 78.5
158 TestKicExistingNetwork 85.07
159 TestMainNoArgs 0.07
166 TestMountStart/serial/DeleteFirst 6.99
195 TestRunningBinaryUpgrade 114.87
210 TestStoppedBinaryUpgrade/Setup 0.63
211 TestStoppedBinaryUpgrade/Upgrade 129.86
212 TestStoppedBinaryUpgrade/MinikubeLogs 2.76
225 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
226 TestNoKubernetes/serial/ProfileList 0.98
233 TestPause/serial/DeletePaused 8.67
234 TestPause/serial/VerifyDeletedResources 0.73
235 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
236 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.45
237 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.87
278 TestStartStop/group/newest-cni/serial/DeployApp 0
279 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.32
289 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
290 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.14.0/json-events (20.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117115004-2067 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117115004-2067 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker : (20.478873601s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (20.48s)

                                                
                                    
TestDownloadOnly/v1.14.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/kubectl
--- PASS: TestDownloadOnly/v1.14.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211117115004-2067
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211117115004-2067: exit status 85 (276.729719ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 11:50:04
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 11:50:04.326657    2077 out.go:297] Setting OutFile to fd 1 ...
	I1117 11:50:04.326795    2077 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:50:04.326800    2077 out.go:310] Setting ErrFile to fd 2...
	I1117 11:50:04.326803    2077 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:50:04.326883    2077 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	W1117 11:50:04.326972    2077 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/config/config.json: no such file or directory
	I1117 11:50:04.327425    2077 out.go:304] Setting JSON to true
	I1117 11:50:04.355478    2077 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1179,"bootTime":1637177425,"procs":324,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 11:50:04.355592    2077 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 11:50:04.385207    2077 notify.go:174] Checking for updates...
	W1117 11:50:04.385218    2077 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/preloaded-tarball: no such file or directory
	I1117 11:50:04.410854    2077 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 11:50:04.496788    2077 docker.go:108] docker version returned error: exit status 1
	I1117 11:50:04.523461    2077 start.go:280] selected driver: docker
	I1117 11:50:04.523480    2077 start.go:775] validating driver "docker" against <nil>
	I1117 11:50:04.523643    2077 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:50:04.665387    2077 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:50:04.718319    2077 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:50:04.856070    2077 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:50:04.883235    2077 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 11:50:04.938810    2077 start_flags.go:349] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1117 11:50:04.938922    2077 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 11:50:04.938940    2077 cni.go:93] Creating CNI manager for ""
	I1117 11:50:04.938947    2077 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 11:50:04.938955    2077 start_flags.go:282] config:
	{Name:download-only-20211117115004-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20211117115004-2067 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 11:50:04.964810    2077 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 11:50:04.990627    2077 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 11:50:04.990624    2077 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 11:50:04.990848    2077 cache.go:107] acquiring lock: {Name:mkd127a3c25f93bb0bb67399f435813c6972ca6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:50:04.990848    2077 cache.go:107] acquiring lock: {Name:mk484f4aa10be29d59ecef162cc3ba4ef356bc71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:50:04.990908    2077 cache.go:107] acquiring lock: {Name:mk7b527433b29f0dd0563715c0f984bcd4089bb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:50:04.990920    2077 cache.go:107] acquiring lock: {Name:mkb2849c13a9c4cb3f5fa192fb2a574e06a810de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:50:04.991944    2077 cache.go:107] acquiring lock: {Name:mkc38557d3f08ef749cdb79439f2e56bd72f6169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:50:04.992066    2077 cache.go:107] acquiring lock: {Name:mk76aaf2f8656a00ba5f71599ab085b0b776a24a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:50:04.992080    2077 cache.go:107] acquiring lock: {Name:mk45b980248ef596bcdcc9984c2292cf07ef6457 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:50:04.992256    2077 cache.go:107] acquiring lock: {Name:mk8b303a5d15a81fc9edc8267d40dfa9f5a412b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:50:04.992190    2077 cache.go:107] acquiring lock: {Name:mk8510e8d29ffb1d7afc63ac2448ba0a514946b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:50:04.992298    2077 cache.go:107] acquiring lock: {Name:mk049836d4c7a5aed7f940eae8dd62aca34ea643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 11:50:04.992954    2077 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/download-only-20211117115004-2067/config.json ...
	I1117 11:50:04.992982    2077 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I1117 11:50:04.993047    2077 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.14.0
	I1117 11:50:04.993055    2077 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.14.0
	I1117 11:50:04.993060    2077 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/profiles/download-only-20211117115004-2067/config.json: {Name:mkdebbaadb426199b94dccc8bc36187d9cb57f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 11:50:04.993077    2077 image.go:134] retrieving image: k8s.gcr.io/coredns:1.3.1
	I1117 11:50:04.993103    2077 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.10
	I1117 11:50:04.993191    2077 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I1117 11:50:04.993234    2077 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.14.0
	I1117 11:50:04.993314    2077 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.14.0
	I1117 11:50:04.993399    2077 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I1117 11:50:04.993451    2077 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 11:50:04.993498    2077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1117 11:50:04.993837    2077 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/linux/v1.14.0/kubectl
	I1117 11:50:04.993837    2077 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/linux/v1.14.0/kubeadm
	I1117 11:50:04.993846    2077 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/linux/v1.14.0/kubelet
	I1117 11:50:04.993861    2077 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-controller-manager} tag:v1.14.0 original:k8s.gcr.io/kube-controller-manager:v1.14.0} opener:0xc000222000 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 11:50:04.993887    2077 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0
	I1117 11:50:04.994103    2077 image.go:176] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:k8s-minikube/storage-provisioner} tag:v5 original:gcr.io/k8s-minikube/storage-provisioner:v5} opener:0xc0004a4380 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 11:50:04.994120    2077 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I1117 11:50:04.994224    2077 image.go:176] found index.docker.io/kubernetesui/dashboard:v2.3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:index.docker.io} repository:kubernetesui/dashboard} tag:v2.3.1 original:docker.io/kubernetesui/dashboard:v2.3.1} opener:0xc0001ca0e0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 11:50:04.994243    2077 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1
	I1117 11:50:04.994357    2077 image.go:176] found k8s.gcr.io/kube-proxy:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-proxy} tag:v1.14.0 original:k8s.gcr.io/kube-proxy:v1.14.0} opener:0xc000d9dd50 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 11:50:04.994371    2077 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0
	I1117 11:50:04.994466    2077 image.go:176] found k8s.gcr.io/kube-apiserver:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-apiserver} tag:v1.14.0 original:k8s.gcr.io/kube-apiserver:v1.14.0} opener:0xc000222310 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 11:50:04.994484    2077 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0
	I1117 11:50:04.995643    2077 image.go:176] found k8s.gcr.io/pause:3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:pause} tag:3.1 original:k8s.gcr.io/pause:3.1} opener:0xc0004a4460 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 11:50:04.995660    2077 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I1117 11:50:04.995717    2077 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 3.732993ms
	I1117 11:50:04.995841    2077 image.go:176] found index.docker.io/kubernetesui/metrics-scraper:v1.0.7 locally: &{ref:{Repository:{Registry:{insecure:false registry:index.docker.io} repository:kubernetesui/metrics-scraper} tag:v1.0.7 original:docker.io/kubernetesui/metrics-scraper:v1.0.7} opener:0xc000628230 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 11:50:04.995867    2077 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7
	I1117 11:50:04.995890    2077 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0" took 4.180393ms
	I1117 11:50:04.995919    2077 image.go:176] found k8s.gcr.io/kube-scheduler:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-scheduler} tag:v1.14.0 original:k8s.gcr.io/kube-scheduler:v1.14.0} opener:0xc000222540 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 11:50:04.995940    2077 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0
	I1117 11:50:04.995980    2077 image.go:176] found k8s.gcr.io/coredns:1.3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:coredns} tag:1.3.1 original:k8s.gcr.io/coredns:1.3.1} opener:0xc000d9dea0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 11:50:04.995996    2077 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1
	I1117 11:50:04.996028    2077 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0" took 5.192253ms
	I1117 11:50:04.996153    2077 image.go:176] found k8s.gcr.io/etcd:3.3.10 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:etcd} tag:3.3.10 original:k8s.gcr.io/etcd:3.3.10} opener:0xc0004a4540 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 11:50:04.996168    2077 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10
	I1117 11:50:04.996206    2077 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 5.314123ms
	I1117 11:50:04.996538    2077 cache.go:96] cache image "k8s.gcr.io/coredns:1.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1" took 4.64813ms
	I1117 11:50:04.996591    2077 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0" took 5.728411ms
	I1117 11:50:04.996650    2077 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 4.701879ms
	I1117 11:50:04.996691    2077 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 5.860518ms
	I1117 11:50:04.996824    2077 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0" took 4.783279ms
	I1117 11:50:04.996839    2077 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.10" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10" took 5.971469ms
	I1117 11:50:05.087351    2077 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 11:50:05.087513    2077 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
	I1117 11:50:05.087602    2077 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 11:50:05.941966    2077 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/darwin/v1.14.0/kubectl
	E1117 11:50:06.423725    2077 cache.go:215] Error caching images:  Caching images for kubeadm: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1": write: unable to calculate manifest: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117115004-2067"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.28s)
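
The Last Start log above fetches kubectl, kubeadm and kubelet with a `?checksum=file:...sha1` reference next to each URL. As a rough illustration of what that verification amounts to (this is not the actual download.go code, and the file names below are hypothetical stand-ins for a cached binary and its .sha1 file), a SHA-1 check in Go could look like this:

	// verify_sha1.go - illustrative sketch: check a downloaded binary against the
	// published .sha1 digest, mirroring the checksum=file:...sha1 URLs logged above.
	package main

	import (
		"crypto/sha1"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
		"strings"
	)

	// sha1Of returns the hex-encoded SHA-1 digest of the file at path.
	func sha1Of(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := sha1.New()
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		// Hypothetical local paths standing in for the cached kubectl and its .sha1 file.
		binPath := "kubectl"
		sumPath := "kubectl.sha1"

		want, err := os.ReadFile(sumPath)
		if err != nil {
			log.Fatal(err)
		}
		got, err := sha1Of(binPath)
		if err != nil {
			log.Fatal(err)
		}
		// The .sha1 file holds the hex digest (possibly followed by a filename).
		if got == strings.Fields(string(want))[0] {
			fmt.Println("checksum OK")
		} else {
			fmt.Printf("checksum mismatch: got %s\n", got)
		}
	}
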

                                                
                                    
TestDownloadOnly/v1.22.3/json-events (4.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117115004-2067 --force --alsologtostderr --kubernetes-version=v1.22.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117115004-2067 --force --alsologtostderr --kubernetes-version=v1.22.3 --container-runtime=docker --driver=docker : (4.808353425s)
--- PASS: TestDownloadOnly/v1.22.3/json-events (4.81s)

                                                
                                    
TestDownloadOnly/v1.22.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.3/preload-exists
--- PASS: TestDownloadOnly/v1.22.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.3/kubectl
--- PASS: TestDownloadOnly/v1.22.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.3/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211117115004-2067
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211117115004-2067: exit status 85 (275.253856ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 11:50:31
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117115004-2067"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.3/LogsDuration (0.28s)

                                                
                                    
TestDownloadOnly/v1.22.4-rc.0/json-events (4.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.4-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117115004-2067 --force --alsologtostderr --kubernetes-version=v1.22.4-rc.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117115004-2067 --force --alsologtostderr --kubernetes-version=v1.22.4-rc.0 --container-runtime=docker --driver=docker : (4.779799653s)
--- PASS: TestDownloadOnly/v1.22.4-rc.0/json-events (4.78s)

                                                
                                    
TestDownloadOnly/v1.22.4-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.4-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.4-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.4-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.4-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.22.4-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.4-rc.0/LogsDuration (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.4-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211117115004-2067
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211117115004-2067: exit status 85 (272.825084ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 11:50:36
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117115004-2067"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.4-rc.0/LogsDuration (0.27s)

                                                
                                    
TestDownloadOnly/DeleteAll (1.03s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:189: (dbg) Done: out/minikube-darwin-amd64 delete --all: (1.032980662s)
--- PASS: TestDownloadOnly/DeleteAll (1.03s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.59s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20211117115004-2067
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.59s)

                                                
                                    
TestDownloadOnlyKic (8.78s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20211117115043-2067 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:226: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20211117115043-2067 --force --alsologtostderr --driver=docker : (7.292534832s)
helpers_test.go:175: Cleaning up "download-docker-20211117115043-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20211117115043-2067
--- PASS: TestDownloadOnlyKic (8.78s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (6.05s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
* minikube v1.24.0 on darwin
- MINIKUBE_LOCATION=12739
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.11.0-to-current375953366
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.11.0-to-current375953366/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.11.0-to-current375953366/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.11.0-to-current375953366/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperKitDriverInstallOrUpdate (6.05s)
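
The driver update above is skipped only because the chown/chmod step needs an interactive sudo. Whether a driver binary already has the permissions those commands would set can be checked without sudo; the sketch below is Unix-only and illustrative, and the path is a hypothetical stand-in for the MINIKUBE_HOME used by the test.

	// driver_perms.go - illustrative sketch (Unix-only): check whether a
	// docker-machine-driver-hyperkit binary already has the root ownership and
	// setuid bit that the sudo commands in the log above would establish.
	package main

	import (
		"fmt"
		"log"
		"os"
		"syscall"
	)

	func main() {
		// Hypothetical path; the test points MINIKUBE_HOME at a temp directory.
		path := os.ExpandEnv("$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit")

		fi, err := os.Stat(path)
		if err != nil {
			log.Fatal(err)
		}
		st, ok := fi.Sys().(*syscall.Stat_t)
		if !ok {
			log.Fatal("not a Unix stat result")
		}

		ownedByRoot := st.Uid == 0
		setuid := fi.Mode()&os.ModeSetuid != 0
		fmt.Printf("owned by root: %v, setuid: %v\n", ownedByRoot, setuid)
		if !ownedByRoot || !setuid {
			fmt.Println("driver would need: sudo chown root:wheel <path> && sudo chmod u+s <path>")
		}
	}
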

                                                
                                    
TestErrorSpam/start (2.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 start --dry-run
--- PASS: TestErrorSpam/start (2.36s)

                                                
                                    
TestErrorSpam/status (0.42s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 status
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 status: exit status 7 (137.919706ms)

                                                
                                                
-- stdout --
	nospam-20211117115142-2067
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:52:30.059131    2898 status.go:258] status error: host: state: unknown state "nospam-20211117115142-2067": docker container inspect nospam-20211117115142-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	E1117 11:52:30.059139    2898 status.go:261] The "nospam-20211117115142-2067" host does not exist!

                                                
                                                
** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 status" failed: exit status 7
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 status
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 status: exit status 7 (139.926288ms)

                                                
                                                
-- stdout --
	nospam-20211117115142-2067
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:52:30.199172    2903 status.go:258] status error: host: state: unknown state "nospam-20211117115142-2067": docker container inspect nospam-20211117115142-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	E1117 11:52:30.199179    2903 status.go:261] The "nospam-20211117115142-2067" host does not exist!

                                                
                                                
** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 status" failed: exit status 7
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 status
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 status: exit status 7 (142.231371ms)

                                                
                                                
-- stdout --
	nospam-20211117115142-2067
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 11:52:30.341803    2908 status.go:258] status error: host: state: unknown state "nospam-20211117115142-2067": docker container inspect nospam-20211117115142-2067 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	E1117 11:52:30.341810    2908 status.go:261] The "nospam-20211117115142-2067" host does not exist!

                                                
                                                
** /stderr **
error_spam_test.go:182: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.42s)

                                                
                                    
TestErrorSpam/pause (0.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 pause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 pause: exit status 80 (192.334367ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117115142-2067": docker container inspect nospam-20211117115142-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 pause" failed: exit status 80
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 pause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 pause: exit status 80 (192.475121ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117115142-2067": docker container inspect nospam-20211117115142-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 pause" failed: exit status 80
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 pause
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 pause: exit status 80 (193.211887ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117115142-2067": docker container inspect nospam-20211117115142-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:182: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (0.58s)

                                                
                                    
TestErrorSpam/unpause (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 unpause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 unpause: exit status 80 (245.969545ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117115142-2067": docker container inspect nospam-20211117115142-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 unpause" failed: exit status 80
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 unpause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 unpause: exit status 80 (264.707618ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117115142-2067": docker container inspect nospam-20211117115142-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 unpause" failed: exit status 80
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 unpause
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 unpause: exit status 80 (249.892665ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117115142-2067": docker container inspect nospam-20211117115142-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:182: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (0.76s)

TestErrorSpam/stop (44.04s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 stop
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 stop: exit status 82 (14.713909382s)

-- stdout --
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20211117115142-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 stop" failed: exit status 82
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 stop
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 stop: exit status 82 (14.649096127s)

-- stdout --
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20211117115142-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 stop" failed: exit status 82
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 stop
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 stop: exit status 82 (14.677967233s)

-- stdout --
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	* Stopping node "nospam-20211117115142-2067"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20211117115142-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117115142-2067
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:182: "out/minikube-darwin-amd64 -p nospam-20211117115142-2067 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20211117115142-2067 stop" failed: exit status 82
--- PASS: TestErrorSpam/stop (44.04s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1633: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/files/etc/test/nested/copy/2067/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1014: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20211117115319-2067 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/functional-20211117115319-20671854232447
functional_test.go:1026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache add minikube-local-cache-test:functional-20211117115319-2067
functional_test.go:1026: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache add minikube-local-cache-test:functional-20211117115319-2067: (1.063098562s)
functional_test.go:1031: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 cache delete minikube-local-cache-test:functional-20211117115319-2067
functional_test.go:1020: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20211117115319-2067
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 config get cpus
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 config get cpus: exit status 14 (47.51324ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 config set cpus 2
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 config unset cpus
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 config get cpus: exit status 14 (61.252363ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
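
The config subtest depends on config get reporting a missing key with exit status 14 and the message above, distinct from other failures. The toy sketch below shows that pattern with a map-backed store; the store and exit handling are assumptions for illustration, not minikube's config package.

	// configget.go - toy sketch of a "config get" that exits 14 when the key is unset;
	// the map-backed store here is an assumption, not minikube's config package.
	package main

	import (
		"fmt"
		"os"
	)

	const exitNoKey = 14 // exit code the test expects for a missing key

	func main() {
		config := map[string]string{} // pretend "config unset cpus" has just run
		key := "cpus"
		val, ok := config[key]
		if !ok {
			fmt.Fprintln(os.Stderr, "Error: specified key could not be found in config")
			os.Exit(exitNoKey)
		}
		fmt.Println(val)
	}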

TestFunctional/parallel/DryRun (1.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:912: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:912: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (569.110701ms)

-- stdout --
	* [functional-20211117115319-2067] minikube v1.24.0 on Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1117 11:57:04.764969    4435 out.go:297] Setting OutFile to fd 1 ...
	I1117 11:57:04.765091    4435 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:57:04.765096    4435 out.go:310] Setting ErrFile to fd 2...
	I1117 11:57:04.765099    4435 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:57:04.765167    4435 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 11:57:04.765403    4435 out.go:304] Setting JSON to false
	I1117 11:57:04.788925    4435 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1599,"bootTime":1637177425,"procs":318,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 11:57:04.789021    4435 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 11:57:04.816083    4435 out.go:176] * [functional-20211117115319-2067] minikube v1.24.0 on Darwin 11.1
	I1117 11:57:04.862790    4435 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 11:57:04.888800    4435 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 11:57:04.914654    4435 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 11:57:04.940801    4435 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 11:57:04.941460    4435 config.go:176] Loaded profile config "functional-20211117115319-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 11:57:04.942767    4435 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 11:57:05.028394    4435 docker.go:132] docker version: linux-20.10.5
	I1117 11:57:05.028541    4435 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:57:05.174283    4435 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 19:57:05.13316405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://ind
ex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:57:05.222835    4435 out.go:176] * Using the docker driver based on existing profile
	I1117 11:57:05.222943    4435 start.go:280] selected driver: docker
	I1117 11:57:05.222955    4435 start.go:775] validating driver "docker" against &{Name:functional-20211117115319-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117115319-2067 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 11:57:05.223070    4435 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 11:57:05.247982    4435 out.go:176] 
	W1117 11:57:05.248263    4435 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1117 11:57:05.273891    4435 out.go:176] 

** /stderr **
functional_test.go:929: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.26s)
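
Exit status 23 here comes from the requested-memory validation: 250MiB is below the usable minimum of 1800MB, so start aborts before any driver work even under --dry-run. The sketch below models that kind of pre-flight check; the function name and constant are illustrative, not minikube's internals.

	// memcheck.go - illustrative sketch of a minimum-memory validation, not minikube's code.
	package main

	import (
		"fmt"
		"os"
	)

	const minUsableMB = 1800 // usable minimum reported in the error above

	// validateRequestedMemory rejects allocations below the usable minimum, which is
	// why "--dry-run --memory 250MB" fails fast with RSRC_INSUFFICIENT_REQ_MEMORY.
	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		if err := validateRequestedMemory(250); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
			os.Exit(23)
		}
	}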

TestFunctional/parallel/InternationalLanguage (0.59s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:954: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117115319-2067 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (588.284301ms)

-- stdout --
	* [functional-20211117115319-2067] minikube v1.24.0 sur Darwin 11.1
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1117 11:56:37.576468    4297 out.go:297] Setting OutFile to fd 1 ...
	I1117 11:56:37.576598    4297 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:56:37.576603    4297 out.go:310] Setting ErrFile to fd 2...
	I1117 11:56:37.576606    4297 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 11:56:37.576717    4297 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube/bin
	I1117 11:56:37.576970    4297 out.go:304] Setting JSON to false
	I1117 11:56:37.600721    4297 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":1572,"bootTime":1637177425,"procs":317,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1117 11:56:37.600822    4297 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 11:56:37.650661    4297 out.go:176] * [functional-20211117115319-2067] minikube v1.24.0 sur Darwin 11.1
	I1117 11:56:37.676568    4297 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 11:56:37.702403    4297 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig
	I1117 11:56:37.728347    4297 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 11:56:37.754564    4297 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube
	I1117 11:56:37.755225    4297 config.go:176] Loaded profile config "functional-20211117115319-2067": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 11:56:37.755842    4297 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 11:56:37.847344    4297 docker.go:132] docker version: linux-20.10.5
	I1117 11:56:37.847488    4297 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 11:56:37.997742    4297 info.go:263] docker info: {ID:O4L5:FEGT:JIID:EORR:XXSY:TL4H:Z4QO:B57Z:YUBU:SYLY:CFE3:7ISX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:21 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 19:56:37.957995463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://in
dex.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I1117 11:56:38.046549    4297 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I1117 11:56:38.046591    4297 start.go:280] selected driver: docker
	I1117 11:56:38.046605    4297 start.go:775] validating driver "docker" against &{Name:functional-20211117115319-2067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117115319-2067 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 11:56:38.046733    4297 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 11:56:38.075487    4297 out.go:176] 
	W1117 11:56:38.075713    4297 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1117 11:56:38.102435    4297 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.59s)

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1482: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 addons list
functional_test.go:1494: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2037: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/ImageCommands/Setup (2.06s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:298: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:298: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.947621261s)
functional_test.go:303: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20211117115319-2067
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.06s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1213: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1218: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1253: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1258: Took "291.905633ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1267: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1272: Took "69.052324ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1304: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1309: Took "346.590936ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1317: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1322: Took "97.976747ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:333: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 image rm gcr.io/google-containers/addon-resizer:functional-20211117115319-2067

=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20211117115319-2067 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:360: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20211117115319-2067
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211117115319-2067
functional_test.go:370: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20211117115319-2067
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20211117115319-2067 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.2s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20211117115319-2067
--- PASS: TestFunctional/delete_addon-resizer_images (0.20s)

TestFunctional/delete_my-image_image (0.1s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:192: (dbg) Run:  docker rmi -f localhost/my-image:functional-20211117115319-2067
--- PASS: TestFunctional/delete_my-image_image (0.10s)

TestFunctional/delete_minikube_cached_images (0.1s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:200: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20211117115319-2067
--- PASS: TestFunctional/delete_minikube_cached_images (0.10s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.21s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20211117115836-2067 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.21s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.7s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20211117120034-2067 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20211117120034-2067 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (120.974212ms)

-- stdout --
	{"specversion":"1.0","id":"16a77e7a-00e7-4972-ae06-c2efb7265d68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20211117120034-2067] minikube v1.24.0 on Darwin 11.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd70679b-6997-44d0-888a-4dac83580912","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"d9595414-3540-4038-a8ca-63d5f5c92e56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/kubeconfig"}}
	{"specversion":"1.0","id":"02197e7b-6c4f-4ffc-9eaa-9d2820f3ed78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"07243fbe-8652-45c4-86cf-86569f39aa6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-896-41d04d1976fcad0b0b824d850ee7b8db3632a01b/.minikube"}}
	{"specversion":"1.0","id":"f47db8fb-1650-4244-a715-5cffcc039136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20211117120034-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20211117120034-2067
--- PASS: TestErrorJSONOutput (0.70s)
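
The --output=json lines above are CloudEvents envelopes (specversion, id, source, type, data), and the test keys off the io.k8s.sigs.minikube.error event. The sketch below shows one way to consume such a stream, assuming one JSON object per line; it is illustrative, not the test's own parser.

	// events.go - illustrative decoder for minikube's --output=json line stream; not test code.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	type cloudEvent struct {
		Type string                 `json:"type"`
		Data map[string]interface{} `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. pipe "minikube start --output=json ..." into this
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" {
				continue
			}
			var ev cloudEvent
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				continue // skip non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event: name=%v exitcode=%v message=%v\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}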

TestKicCustomNetwork/use_default_bridge_network (78.5s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20211117120210-2067 --network=bridge
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20211117120210-2067 --network=bridge: (1m13.088675603s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20211117120210-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20211117120210-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20211117120210-2067: (5.297488468s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (78.50s)

TestKicExistingNetwork (85.07s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20211117120333-2067 --network=existing-network
kic_custom_network_test.go:94: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20211117120333-2067 --network=existing-network: (1m15.338116123s)
helpers_test.go:175: Cleaning up "existing-network-20211117120333-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20211117120333-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20211117120333-2067: (5.291431831s)
--- PASS: TestKicExistingNetwork (85.07s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMountStart/serial/DeleteFirst (6.99s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20211117120454-2067 --alsologtostderr -v=5
pause_test.go:130: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20211117120454-2067 --alsologtostderr -v=5: (6.985383765s)
--- PASS: TestMountStart/serial/DeleteFirst (6.99s)

TestRunningBinaryUpgrade (114.87s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2684531750.exe start -p running-upgrade-20211117121853-2067 --memory=2200 --vm-driver=docker 

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2684531750.exe start -p running-upgrade-20211117121853-2067 --memory=2200 --vm-driver=docker : (1m1.532557671s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-20211117121853-2067 --memory=2200 --alsologtostderr -v=1 --driver=docker 
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-20211117121853-2067 --memory=2200 --alsologtostderr -v=1 --driver=docker : (42.360508124s)
helpers_test.go:175: Cleaning up "running-upgrade-20211117121853-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20211117121853-2067
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20211117121853-2067: (10.562845577s)
--- PASS: TestRunningBinaryUpgrade (114.87s)
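Note: the upgrade path above is just two start invocations against the same profile. A minimal shell sketch, assuming an older minikube binary has already been downloaded to ./minikube-v1.9.0 (an illustrative path; the test uses a temp file):

  # create, and leave running, a cluster with the old binary
  ./minikube-v1.9.0 start -p running-upgrade-demo --memory=2200 --vm-driver=docker
  # upgrade in place by re-running start on the same profile with the current binary
  out/minikube-darwin-amd64 start -p running-upgrade-demo --memory=2200 --alsologtostderr -v=1 --driver=docker
  # clean up
  out/minikube-darwin-amd64 delete -p running-upgrade-demo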
TestStoppedBinaryUpgrade/Setup (0.63s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.63s)
TestStoppedBinaryUpgrade/Upgrade (129.86s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.743038489.exe start -p stopped-upgrade-20211117121757-2067 --memory=2200 --vm-driver=docker 
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.743038489.exe start -p stopped-upgrade-20211117121757-2067 --memory=2200 --vm-driver=docker : (1m21.779936177s)
version_upgrade_test.go:199: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.743038489.exe -p stopped-upgrade-20211117121757-2067 stop
version_upgrade_test.go:199: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.743038489.exe -p stopped-upgrade-20211117121757-2067 stop: (3.217204824s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-20211117121757-2067 --memory=2200 --alsologtostderr -v=1 --driver=docker 
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-20211117121757-2067 --memory=2200 --alsologtostderr -v=1 --driver=docker : (44.859765662s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (129.86s)
TestStoppedBinaryUpgrade/MinikubeLogs (2.76s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20211117121757-2067
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20211117121757-2067: (2.759031187s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.76s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117122048-2067 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:89: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117122048-2067 "sudo systemctl is-active --quiet service kubelet": exit status 80 (194.217823ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "NoKubernetes-20211117122048-2067": docker container inspect NoKubernetes-20211117122048-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117122048-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_a637006dfde1245e93469fe3227a30492e7a4c9f_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
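Note: the exit status 80 above comes from the guest-state probe, not from kubelet: the profile's Docker container no longer exists, so the ssh wrapper fails before the systemctl check runs. A minimal shell sketch of the two checks involved, assuming a profile named nokube-demo and an illustrative --format template (the log elides the actual one):

  # the command the test runs inside the node; succeeds only if kubelet is active
  out/minikube-darwin-amd64 ssh -p nokube-demo "sudo systemctl is-active --quiet service kubelet"
  # the container-state query minikube consults first
  docker container inspect nokube-demo --format '{{.State.Status}}'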
TestNoKubernetes/serial/ProfileList (0.98s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)
TestPause/serial/DeletePaused (8.67s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-20211117122013-2067 --alsologtostderr -v=5
pause_test.go:130: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-20211117122013-2067 --alsologtostderr -v=5: (8.67416267s)
--- PASS: TestPause/serial/DeletePaused (8.67s)
TestPause/serial/VerifyDeletedResources (0.73s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:166: (dbg) Run:  docker ps -a
pause_test.go:171: (dbg) Run:  docker volume inspect pause-20211117122013-2067
pause_test.go:171: (dbg) Non-zero exit: docker volume inspect pause-20211117122013-2067: exit status 1 (101.327889ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20211117122013-2067
** /stderr **
pause_test.go:176: (dbg) Run:  sudo docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.73s)
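Note: the cleanup verification above relies only on plain Docker queries. A minimal shell sketch, assuming the profile pause-demo was just deleted (the name is illustrative):

  # no container for the profile should remain
  docker ps -a
  # its volume should be gone; expect "Error: No such volume"
  docker volume inspect pause-demo
  # and its network should no longer be listed
  docker network ls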
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117122048-2067 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:89: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117122048-2067 "sudo systemctl is-active --quiet service kubelet": exit status 80 (223.049651ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "NoKubernetes-20211117122048-2067": docker container inspect NoKubernetes-20211117122048-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20211117122048-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_a637006dfde1245e93469fe3227a30492e7a4c9f_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.45s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.45s)
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.87s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.87s)
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.32s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20211117123459-2067 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.32s)
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
Test skip (17/236)
TestDownloadOnly/v1.14.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)
TestDownloadOnly/v1.14.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)
TestDownloadOnly/v1.22.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.22.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.3/cached-images (0.00s)
TestDownloadOnly/v1.22.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.22.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.3/binaries (0.00s)
TestDownloadOnly/v1.22.4-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.22.4-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.4-rc.0/cached-images (0.00s)
TestDownloadOnly/v1.22.4-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.22.4-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.4-rc.0/binaries (0.00s)
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:491: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/MountCmd/any-port (11.52s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20211117115319-2067 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest2108955808:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1637178998127731000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest2108955808/created-by-test
functional_test_mount_test.go:110: wrote "test-1637178998127731000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest2108955808/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1637178998127731000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest2108955808/test-1637178998127731000
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (248.901932ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_mount_0e37776d87a7c09ef62cf37a3627f00495636671_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (199.980905ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (199.352072ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (201.315163ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (200.797815ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (196.780918ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (218.587253ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:126: skipping: mount did not appear, likely because macOS requires prompt to allow non-codesigned binaries to listen on non-localhost port
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:93: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo umount -f /mount-9p": exit status 80 (189.973461ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_98b14adcd82ee1c7752a4e4be782b00e25555f68_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:95: "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"sudo umount -f /mount-9p\"": exit status 80
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20211117115319-2067 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest2108955808:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.52s)
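Note: both mount subtests skip for the same reason recorded above: the mount server never becomes reachable because macOS prompts before allowing a non-codesigned binary to listen on a non-localhost port. A minimal shell sketch of the manual equivalent, assuming a running profile and a local directory ./mnt (both illustrative):

  # terminal 1: expose a host directory inside the node over 9p
  out/minikube-darwin-amd64 mount -p <profile> ./mnt:/mount-9p --alsologtostderr -v=1
  # terminal 2: the mount should appear as a 9p filesystem
  out/minikube-darwin-amd64 -p <profile> ssh "findmnt -T /mount-9p | grep 9p"

When the 9p entry never appears, the test retries and then skips, as logged here.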
TestFunctional/parallel/MountCmd/specific-port (14.4s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20211117115319-2067 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest2627620758:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (244.635384ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_mount_ba9e4e76eedcc056e3ec59a5dbf6b0bd31d769b6_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (191.820602ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (196.773354ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (200.61721ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (197.867499ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (200.552179ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (193.228716ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (193.669877ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:263: skipping: mount did not appear, likely because macOS requires prompt to allow non-codesigned binaries to listen on non-localhost port
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh "sudo umount -f /mount-9p": exit status 80 (187.033859ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117115319-2067": docker container inspect functional-20211117115319-2067 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117115319-2067
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_98b14adcd82ee1c7752a4e4be782b00e25555f68_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test_mount_test.go:244: "out/minikube-darwin-amd64 -p functional-20211117115319-2067 ssh \"sudo umount -f /mount-9p\"": exit status 80
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20211117115319-2067 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest2627620758:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (14.40s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
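The three TunnelCmd DNS checks above are skipped because tunnel DNS forwarding is only wired up for the Hyperkit driver on Darwin. A manual sketch of what they exercise, assuming a driver where forwarding works, that `minikube tunnel` is left running for the profile, and a hypothetical nginx-svc Service in the default namespace:

    # keep a tunnel open in one terminal so cluster DNS and LoadBalancer routes are reachable
    $ out/minikube-darwin-amd64 -p functional-20211117115319-2067 tunnel

    # resolve the Service name with dig and with macOS's directory-services cache
    $ dig +time=5 +tries=3 nginx-svc.default.svc.cluster.local.
    $ dscacheutil -q host -a name nginx-svc.default.svc.cluster.local

    # and access the Service through its DNS name
    $ curl -s http://nginx-svc.default.svc.cluster.local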

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
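TestGvisorAddon is opted out of by default (--gvisor=false is a suite flag, not a minikube one). Outside the harness, the addon itself can be exercised directly; a minimal sketch with a hypothetical profile name, assuming the gvisor addon still requires the containerd runtime and registers a gvisor RuntimeClass:

    $ minikube start -p gvisor-test --container-runtime=containerd
    $ minikube -p gvisor-test addons enable gvisor
    # the addon is expected to register a gvisor RuntimeClass that pods can opt into
    $ kubectl --context gvisor-test get runtimeclass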

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.78s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20211117121607-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20211117121607-2067
--- SKIP: TestNetworkPlugins/group/flannel (0.78s)
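The flannel variant is skipped because the CNI is not yet compatible with the Docker driver (the iptables legacy / CNI-x failure quoted above). For comparison, the same CNI can be selected on a VM-based driver via minikube's --cni flag; a sketch with a hypothetical profile name, assuming the hyperkit driver is installed:

    $ minikube start -p flannel-test --driver=hyperkit --cni=flannel
    # confirm the flannel pods come up before running any network-plugin checks
    $ kubectl --context flannel-test get pods -A | grep -i flannel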

TestStartStop/group/disable-driver-mounts (0.62s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20211117123459-2067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20211117123459-2067
--- SKIP: TestStartStop/group/disable-driver-mounts (0.62s)
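The disable-driver-mounts group only runs on VirtualBox, hence the skip under the Docker driver. The option it covers is an ordinary minikube start flag; a sketch with a hypothetical profile name, assuming VirtualBox is available:

    $ minikube start -p no-mounts-test --driver=virtualbox --disable-driver-mounts
    # with the flag set, the hypervisor's default host-folder share should be absent in the guest
    $ minikube -p no-mounts-test ssh "mount | grep -iE 'users|hosthome' || echo no driver mounts"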
