Test Report: Docker_macOS 12739

80e07762e28b592b48b4aeaf3aab89efbbe303e1:2021-11-17:21391

Failed tests (161/236)

Order  Failed test  Duration (s)
4 TestDownloadOnly/v1.14.0/preload-exists 0.18
26 TestOffline 59.61
28 TestAddons/Setup 45.81
29 TestCertOptions 53.8
30 TestCertExpiration 301.24
31 TestDockerFlags 58.83
32 TestForceSystemdFlag 49.76
33 TestForceSystemdEnv 51.43
38 TestErrorSpam/setup 45.63
47 TestFunctional/serial/StartWithProxy 46.32
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 69.62
50 TestFunctional/serial/KubeContext 0.3
51 TestFunctional/serial/KubectlGetPods 0.3
54 TestFunctional/serial/CacheCmd/cache/add_remote 0.31
56 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.09
57 TestFunctional/serial/CacheCmd/cache/list 0.07
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.68
60 TestFunctional/serial/CacheCmd/cache/delete 0.18
61 TestFunctional/serial/MinikubeKubectlCmd 0.71
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.76
63 TestFunctional/serial/ExtraConfig 69.67
64 TestFunctional/serial/ComponentHealth 0.3
65 TestFunctional/serial/LogsCmd 0.42
66 TestFunctional/serial/LogsFileCmd 0.42
69 TestFunctional/parallel/DashboardCmd 0.58
72 TestFunctional/parallel/StatusCmd 0.75
75 TestFunctional/parallel/ServiceCmd 0.41
77 TestFunctional/parallel/PersistentVolumeClaim 0.26
79 TestFunctional/parallel/SSHCmd 0.77
80 TestFunctional/parallel/CpCmd 0.5
81 TestFunctional/parallel/MySQL 0.29
82 TestFunctional/parallel/FileSync 0.47
83 TestFunctional/parallel/CertSync 1.45
87 TestFunctional/parallel/NodeLabels 0.29
89 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
92 TestFunctional/parallel/Version/components 0.2
93 TestFunctional/parallel/ImageCommands/ImageList 0.17
94 TestFunctional/parallel/ImageCommands/ImageBuild 0.55
96 TestFunctional/parallel/DockerEnv/bash 0.21
97 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
98 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.45
99 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
102 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.34
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
108 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 74.56
109 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.18
111 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.35
124 TestIngressAddonLegacy/StartLegacyK8sCluster 52.58
126 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 0.63
128 TestIngressAddonLegacy/serial/ValidateIngressAddons 0.26
131 TestJSONOutput/start/Command 45.19
132 TestJSONOutput/start/Audit 0
134 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
135 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
137 TestJSONOutput/pause/Command 0.16
138 TestJSONOutput/pause/Audit 0
143 TestJSONOutput/unpause/Command 0.42
144 TestJSONOutput/unpause/Audit 0
149 TestJSONOutput/stop/Command 14.74
150 TestJSONOutput/stop/Audit 0
152 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
156 TestKicCustomNetwork/create_custom_network 95.32
162 TestMountStart/serial/StartWithMountFirst 46.1
163 TestMountStart/serial/StartWithMountSecond 46.6
164 TestMountStart/serial/VerifyMountFirst 0.53
165 TestMountStart/serial/VerifyMountSecond 0.47
167 TestMountStart/serial/VerifyMountPostDelete 0.48
168 TestMountStart/serial/Stop 15.05
169 TestMountStart/serial/RestartStopped 67.29
170 TestMountStart/serial/VerifyMountPostStop 0.55
173 TestMultiNode/serial/FreshStart2Nodes 46.43
174 TestMultiNode/serial/DeployApp2Nodes 0.76
175 TestMultiNode/serial/PingHostFrom2Pods 0.34
176 TestMultiNode/serial/AddNode 0.48
177 TestMultiNode/serial/ProfileList 0.58
178 TestMultiNode/serial/CopyFile 0.44
179 TestMultiNode/serial/StopNode 0.67
180 TestMultiNode/serial/StartAfterStop 0.61
181 TestMultiNode/serial/RestartKeepsNodes 84.9
182 TestMultiNode/serial/DeleteNode 0.71
183 TestMultiNode/serial/StopMultiNode 15.37
184 TestMultiNode/serial/RestartMultiNode 69.67
185 TestMultiNode/serial/ValidateNameConflict 102.97
189 TestPreload 49.52
191 TestScheduledStopUnix 48.69
192 TestSkaffold 52.29
194 TestInsufficientStorage 13.11
197 TestKubernetesUpgrade 116.56
198 TestMissingContainerUpgrade 228.64
213 TestStoppedBinaryUpgrade/Upgrade 115.02
215 TestPause/serial/Start 0.66
216 TestPause/serial/SecondStartNoReconfiguration 0.64
217 TestPause/serial/Pause 0.51
218 TestPause/serial/VerifyStatus 0.25
219 TestPause/serial/Unpause 0.53
220 TestPause/serial/PauseAgain 0.52
222 TestPause/serial/VerifyDeletedResources 1.26
231 TestNoKubernetes/serial/Start 0.67
233 TestNoKubernetes/serial/ProfileList 0.63
234 TestNoKubernetes/serial/Stop 0.37
235 TestNoKubernetes/serial/StartNoArgs 0.64
237 TestNetworkPlugins/group/auto/Start 0.46
238 TestNetworkPlugins/group/kindnet/Start 0.44
239 TestNetworkPlugins/group/false/Start 0.43
240 TestNetworkPlugins/group/enable-default-cni/Start 0.41
241 TestNetworkPlugins/group/bridge/Start 0.46
242 TestNetworkPlugins/group/kubenet/Start 0.44
243 TestNetworkPlugins/group/calico/Start 0.43
244 TestNetworkPlugins/group/cilium/Start 0.43
245 TestNetworkPlugins/group/custom-weave/Start 0.41
247 TestStartStop/group/old-k8s-version/serial/FirstStart 0.67
248 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
249 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.35
250 TestStartStop/group/old-k8s-version/serial/Stop 0.31
251 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.42
252 TestStartStop/group/old-k8s-version/serial/SecondStart 0.64
253 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.21
254 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.31
255 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
256 TestStartStop/group/old-k8s-version/serial/Pause 0.55
257 TestStoppedBinaryUpgrade/MinikubeLogs 0.43
259 TestStartStop/group/no-preload/serial/FirstStart 0.71
260 TestStartStop/group/no-preload/serial/DeployApp 0.65
262 TestStartStop/group/default-k8s-different-port/serial/FirstStart 0.76
263 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.54
264 TestStartStop/group/default-k8s-different-port/serial/DeployApp 0.6
265 TestStartStop/group/no-preload/serial/Stop 0.35
266 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.51
267 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.49
268 TestStartStop/group/no-preload/serial/SecondStart 0.76
269 TestStartStop/group/default-k8s-different-port/serial/Stop 0.48
270 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.49
271 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.23
272 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.36
273 TestStartStop/group/default-k8s-different-port/serial/SecondStart 0.8
274 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
275 TestStartStop/group/no-preload/serial/Pause 0.61
276 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 0.27
277 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 0.3
278 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.33
279 TestStartStop/group/default-k8s-different-port/serial/Pause 0.57
281 TestStartStop/group/newest-cni/serial/FirstStart 0.69
283 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.34
284 TestStartStop/group/newest-cni/serial/Stop 0.46
286 TestStartStop/group/embed-certs/serial/FirstStart 0.78
287 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.55
288 TestStartStop/group/embed-certs/serial/DeployApp 0.72
289 TestStartStop/group/newest-cni/serial/SecondStart 0.82
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.41
293 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
294 TestStartStop/group/embed-certs/serial/Stop 0.41
295 TestStartStop/group/newest-cni/serial/Pause 0.59
296 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.5
297 TestStartStop/group/embed-certs/serial/SecondStart 0.7
298 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.24
299 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.29
300 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
301 TestStartStop/group/embed-certs/serial/Pause 0.52
TestDownloadOnly/v1.14.0/preload-exists (0.18s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
aaa_download_only_test.go:105: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.14.0/preload-exists (0.18s)
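The missing file in the error above follows the preload tarball naming pattern visible in the path itself. A minimal sketch decomposing that name, with field labels inferred from this log line only (not from minikube source):

```shell
# Hedged sketch: decompose the preload tarball filename from the failure above.
# Field names are assumptions read off the filename, not minikube's own terminology.
PRELOAD_SCHEMA=v14        # preload schema version ("v14" segment in the filename)
K8S_VERSION=v1.14.0       # Kubernetes version under test
RUNTIME=docker            # container runtime
STORAGE_DRIVER=overlay2   # storage driver
ARCH=amd64                # CPU architecture
TARBALL="preloaded-images-k8s-${PRELOAD_SCHEMA}-${K8S_VERSION}-${RUNTIME}-${STORAGE_DRIVER}-${ARCH}.tar.lz4"
echo "$TARBALL"
# The test failed because this file was absent under
# $MINIKUBE_HOME/cache/preloaded-tarball/ on the CI host.
```

Checking for the same file locally (with your own MINIKUBE_HOME) is one way to confirm whether the preload cache was ever populated for this Kubernetes version.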
TestOffline (59.61s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20211117144907-2140 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-20211117144907-2140 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 80 (46.812943198s)

-- stdout --
	* [offline-docker-20211117144907-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node offline-docker-20211117144907-2140 in cluster offline-docker-20211117144907-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-20211117144907-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 14:49:07.373460    9816 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:49:07.373603    9816 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:49:07.373608    9816 out.go:310] Setting ErrFile to fd 2...
	I1117 14:49:07.373612    9816 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:49:07.373705    9816 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:49:07.374031    9816 out.go:304] Setting JSON to false
	I1117 14:49:07.400028    9816 start.go:112] hostinfo: {"hostname":"37310.local","uptime":2922,"bootTime":1637186425,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:49:07.400174    9816 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:49:07.427508    9816 out.go:176] * [offline-docker-20211117144907-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:49:07.427748    9816 notify.go:174] Checking for updates...
	I1117 14:49:07.454480    9816 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:49:07.480950    9816 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:49:07.507071    9816 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:49:07.532862    9816 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:49:07.533291    9816 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:49:07.533339    9816 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:49:07.648935    9816 docker.go:132] docker version: linux-20.10.6
	I1117 14:49:07.649086    9816 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:49:07.844319    9816 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:49:07.765337812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:49:07.870476    9816 out.go:176] * Using the docker driver based on user configuration
	I1117 14:49:07.870539    9816 start.go:280] selected driver: docker
	I1117 14:49:07.870571    9816 start.go:775] validating driver "docker" against <nil>
	I1117 14:49:07.870592    9816 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:49:07.873939    9816 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:49:08.069567    9816 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:49:07.992753537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:49:08.069676    9816 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 14:49:08.069797    9816 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 14:49:08.069815    9816 cni.go:93] Creating CNI manager for ""
	I1117 14:49:08.069822    9816 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:49:08.069827    9816 start_flags.go:282] config:
	{Name:offline-docker-20211117144907-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:offline-docker-20211117144907-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:49:08.096870    9816 out.go:176] * Starting control plane node offline-docker-20211117144907-2140 in cluster offline-docker-20211117144907-2140
	I1117 14:49:08.096998    9816 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:49:08.123427    9816 out.go:176] * Pulling base image ...
	I1117 14:49:08.123462    9816 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:49:08.123506    9816 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:49:08.123513    9816 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 14:49:08.123531    9816 cache.go:57] Caching tarball of preloaded images
	I1117 14:49:08.123660    9816 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 14:49:08.123676    9816 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 14:49:08.124335    9816 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/offline-docker-20211117144907-2140/config.json ...
	I1117 14:49:08.124435    9816 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/offline-docker-20211117144907-2140/config.json: {Name:mk899682e96e206be49ae7e5f6de4b9e378382e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 14:49:08.307788    9816 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:49:08.307823    9816 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:49:08.307837    9816 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:49:08.307873    9816 start.go:313] acquiring machines lock for offline-docker-20211117144907-2140: {Name:mk863ba803189fe23f2805a0078427bb0ba7e422 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:49:08.308019    9816 start.go:317] acquired machines lock for "offline-docker-20211117144907-2140" in 131.728µs
	I1117 14:49:08.308052    9816 start.go:89] Provisioning new machine with config: &{Name:offline-docker-20211117144907-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:offline-docker-20211117144907-2140 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 14:49:08.308117    9816 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:49:08.334724    9816 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 14:49:08.335220    9816 start.go:160] libmachine.API.Create for "offline-docker-20211117144907-2140" (driver="docker")
	I1117 14:49:08.335369    9816 client.go:168] LocalClient.Create starting
	I1117 14:49:08.335646    9816 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:49:08.355479    9816 main.go:130] libmachine: Decoding PEM data...
	I1117 14:49:08.355529    9816 main.go:130] libmachine: Parsing certificate...
	I1117 14:49:08.355616    9816 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:49:08.355689    9816 main.go:130] libmachine: Decoding PEM data...
	I1117 14:49:08.355710    9816 main.go:130] libmachine: Parsing certificate...
	I1117 14:49:08.356640    9816 cli_runner.go:115] Run: docker network inspect offline-docker-20211117144907-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:49:08.524814    9816 cli_runner.go:162] docker network inspect offline-docker-20211117144907-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:49:08.524921    9816 network_create.go:254] running [docker network inspect offline-docker-20211117144907-2140] to gather additional debugging logs...
	I1117 14:49:08.524941    9816 cli_runner.go:115] Run: docker network inspect offline-docker-20211117144907-2140
	W1117 14:49:08.709395    9816 cli_runner.go:162] docker network inspect offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:08.709430    9816 network_create.go:257] error running [docker network inspect offline-docker-20211117144907-2140]: docker network inspect offline-docker-20211117144907-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20211117144907-2140
	I1117 14:49:08.709447    9816 network_create.go:259] output of [docker network inspect offline-docker-20211117144907-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20211117144907-2140
	
	** /stderr **
	I1117 14:49:08.709574    9816 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:49:08.871375    9816 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000ac8490] misses:0}
	I1117 14:49:08.871414    9816 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:08.871431    9816 network_create.go:106] attempt to create docker network offline-docker-20211117144907-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:49:08.871524    9816 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117144907-2140
	I1117 14:49:12.747079    9816 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117144907-2140: (3.875437878s)
	I1117 14:49:12.747113    9816 network_create.go:90] docker network offline-docker-20211117144907-2140 192.168.49.0/24 created
	I1117 14:49:12.747155    9816 kic.go:106] calculated static IP "192.168.49.2" for the "offline-docker-20211117144907-2140" container
	I1117 14:49:12.747295    9816 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:49:12.870470    9816 cli_runner.go:115] Run: docker volume create offline-docker-20211117144907-2140 --label name.minikube.sigs.k8s.io=offline-docker-20211117144907-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:49:13.021168    9816 oci.go:102] Successfully created a docker volume offline-docker-20211117144907-2140
	I1117 14:49:13.021303    9816 cli_runner.go:115] Run: docker run --rm --name offline-docker-20211117144907-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20211117144907-2140 --entrypoint /usr/bin/test -v offline-docker-20211117144907-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:49:13.701192    9816 oci.go:106] Successfully prepared a docker volume offline-docker-20211117144907-2140
	I1117 14:49:13.701264    9816 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	E1117 14:49:13.701268    9816 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:49:13.701288    9816 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:49:13.701299    9816 client.go:171] LocalClient.Create took 5.365842832s
	I1117 14:49:13.701412    9816 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117144907-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:49:15.705121    9816 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:49:15.705204    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:15.839745    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:15.839831    9816 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
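	The repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls above are how minikube resolves the host port mapped to the container's SSH port. A minimal Python sketch of what that Go template computes, run against hypothetical inspect output (the lookups in this log fail with exit status 1 because the container was never actually created):

```python
import json

# Hypothetical `docker container inspect` output for a running kic node;
# the "55000" host port is illustrative, not taken from this log.
inspect_output = json.loads("""
[{"NetworkSettings": {"Ports": {"22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55000"}]}}}]
""")

def ssh_host_port(inspect_json):
    # Equivalent of the Go template:
    #   {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
    # i.e. first published binding of container port 22/tcp.
    ports = inspect_json[0]["NetworkSettings"]["Ports"]
    return ports["22/tcp"][0]["HostPort"]

print(ssh_host_port(inspect_output))  # 55000
```

When the container does not exist, `docker inspect` writes `Error: No such container: ...` to stderr and exits 1, which is exactly the failure surfaced through the `get ssh host-port` error chain above.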
	I1117 14:49:16.116228    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:16.246717    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:16.246792    9816 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:16.788749    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:16.920105    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:16.920197    9816 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
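	The `retry.go:31` lines above show the wait between attempts growing (276ms, 540ms, 655ms, ...). An illustrative sketch of that shape, with assumed constants; minikube's actual retry helper also randomises each interval, which is why the logged waits are not an exact geometric series:

```python
def backoff_delays(attempts, base=0.25, factor=1.5):
    # Geometric backoff in seconds. base/factor are illustrative
    # placeholders, not minikube's real tuning.
    delays, d = [], base
    for _ in range(attempts):
        delays.append(round(d, 3))
        d *= factor
    return delays

print(backoff_delays(4))  # [0.25, 0.375, 0.562, 0.844]
```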
	I1117 14:49:17.580429    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:17.713347    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	W1117 14:49:17.713471    9816 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	
	W1117 14:49:17.713505    9816 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:17.713517    9816 start.go:129] duration metric: createHost completed in 9.405267177s
	I1117 14:49:17.713524    9816 start.go:80] releasing machines lock for "offline-docker-20211117144907-2140", held for 9.405369982s
	W1117 14:49:17.713541    9816 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:49:17.714114    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:17.844397    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:17.844444    9816 delete.go:82] Unable to get host status for offline-docker-20211117144907-2140, assuming it has already been deleted: state: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	W1117 14:49:17.844603    9816 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:49:17.844615    9816 start.go:547] Will try again in 5 seconds ...
	I1117 14:49:19.814951    9816 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117144907-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.113407582s)
	I1117 14:49:19.814968    9816 kic.go:188] duration metric: took 6.113599 seconds to extract preloaded images to volume
	I1117 14:49:22.852936    9816 start.go:313] acquiring machines lock for offline-docker-20211117144907-2140: {Name:mk863ba803189fe23f2805a0078427bb0ba7e422 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:49:22.853080    9816 start.go:317] acquired machines lock for "offline-docker-20211117144907-2140" in 115.034µs
	I1117 14:49:22.853144    9816 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:49:22.853157    9816 fix.go:55] fixHost starting: 
	I1117 14:49:22.853640    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:22.966792    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:22.966836    9816 fix.go:108] recreateIfNeeded on offline-docker-20211117144907-2140: state= err=unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:22.966865    9816 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:49:23.015652    9816 out.go:176] * docker "offline-docker-20211117144907-2140" container is missing, will recreate.
	I1117 14:49:23.015711    9816 delete.go:124] DEMOLISHING offline-docker-20211117144907-2140 ...
	I1117 14:49:23.015942    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:23.131334    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:49:23.131372    9816 stop.go:75] unable to get state: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:23.131389    9816 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:23.131785    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:23.243199    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:23.243238    9816 delete.go:82] Unable to get host status for offline-docker-20211117144907-2140, assuming it has already been deleted: state: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:23.243325    9816 cli_runner.go:115] Run: docker container inspect -f {{.Id}} offline-docker-20211117144907-2140
	W1117 14:49:23.353789    9816 cli_runner.go:162] docker container inspect -f {{.Id}} offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:23.353817    9816 kic.go:360] could not find the container offline-docker-20211117144907-2140 to remove it. will try anyways
	I1117 14:49:23.353904    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:23.465239    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:49:23.465275    9816 oci.go:83] error getting container status, will try to delete anyways: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:23.465358    9816 cli_runner.go:115] Run: docker exec --privileged -t offline-docker-20211117144907-2140 /bin/bash -c "sudo init 0"
	W1117 14:49:23.577550    9816 cli_runner.go:162] docker exec --privileged -t offline-docker-20211117144907-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:49:23.577579    9816 oci.go:658] error shutdown offline-docker-20211117144907-2140: docker exec --privileged -t offline-docker-20211117144907-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:24.580180    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:24.689423    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:24.689460    9816 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:24.689469    9816 oci.go:672] temporary error: container offline-docker-20211117144907-2140 status is  but expect it to be exited
	I1117 14:49:24.689490    9816 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:25.155156    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:25.356298    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:25.356346    9816 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:25.356358    9816 oci.go:672] temporary error: container offline-docker-20211117144907-2140 status is  but expect it to be exited
	I1117 14:49:25.356387    9816 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:26.255182    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:26.374009    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:26.374052    9816 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:26.374064    9816 oci.go:672] temporary error: container offline-docker-20211117144907-2140 status is  but expect it to be exited
	I1117 14:49:26.374090    9816 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:27.012627    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:27.122846    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:27.122883    9816 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:27.122892    9816 oci.go:672] temporary error: container offline-docker-20211117144907-2140 status is  but expect it to be exited
	I1117 14:49:27.122914    9816 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:28.239766    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:28.350132    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:28.350200    9816 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:28.350209    9816 oci.go:672] temporary error: container offline-docker-20211117144907-2140 status is  but expect it to be exited
	I1117 14:49:28.350231    9816 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:29.862722    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:29.972351    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:29.972389    9816 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:29.972399    9816 oci.go:672] temporary error: container offline-docker-20211117144907-2140 status is  but expect it to be exited
	I1117 14:49:29.972420    9816 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:33.013631    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:33.130792    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:33.130883    9816 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:33.130901    9816 oci.go:672] temporary error: container offline-docker-20211117144907-2140 status is  but expect it to be exited
	I1117 14:49:33.130962    9816 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:38.914461    9816 cli_runner.go:115] Run: docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}
	W1117 14:49:39.038684    9816 cli_runner.go:162] docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:39.038727    9816 oci.go:670] temporary error verifying shutdown: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:39.038756    9816 oci.go:672] temporary error: container offline-docker-20211117144907-2140 status is  but expect it to be exited
	I1117 14:49:39.038787    9816 oci.go:87] couldn't shut down offline-docker-20211117144907-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	 
	I1117 14:49:39.038882    9816 cli_runner.go:115] Run: docker rm -f -v offline-docker-20211117144907-2140
	I1117 14:49:39.149192    9816 cli_runner.go:115] Run: docker container inspect -f {{.Id}} offline-docker-20211117144907-2140
	W1117 14:49:39.258091    9816 cli_runner.go:162] docker container inspect -f {{.Id}} offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:39.258197    9816 cli_runner.go:115] Run: docker network inspect offline-docker-20211117144907-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:49:39.372562    9816 cli_runner.go:115] Run: docker network rm offline-docker-20211117144907-2140
	I1117 14:49:42.161311    9816 cli_runner.go:168] Completed: docker network rm offline-docker-20211117144907-2140: (2.788667863s)
	W1117 14:49:42.161597    9816 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:49:42.161604    9816 fix.go:120] Sleeping 1 second for extra luck!
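	The DEMOLISHING sequence above is deliberately best-effort: every step tolerates "No such container" because the kic node was never created, so only the network and volume actually need removing. A hedged sketch of that control flow with the docker interactions stubbed out as plain callables (the real logic lives across minikube's delete.go and oci.go):

```python
def demolish(name, inspect, shutdown, remove, remove_network):
    # Mirror of the logged teardown: inspect may fail (container gone),
    # shutdown may fail ("probably ok"), but rm -f -v and network rm
    # always run so leftover resources are cleaned up.
    try:
        state = inspect(name)      # docker container inspect --format={{.State.Status}}
    except RuntimeError:
        state = None               # assume already deleted, keep going
    if state not in (None, "exited"):
        try:
            shutdown(name)         # docker exec ... "sudo init 0"
        except RuntimeError:
            pass                   # stophost failed (probably ok)
    remove(name)                   # docker rm -f -v <name>
    remove_network(name)           # docker network rm <name>
    return "deleted"

def missing(_name):
    raise RuntimeError("No such container")

result = demolish("offline-docker", missing, missing,
                  lambda n: None, lambda n: None)
print(result)  # deleted
```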
	I1117 14:49:43.161866    9816 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:49:43.189044    9816 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 14:49:43.189261    9816 start.go:160] libmachine.API.Create for "offline-docker-20211117144907-2140" (driver="docker")
	I1117 14:49:43.189331    9816 client.go:168] LocalClient.Create starting
	I1117 14:49:43.189535    9816 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:49:43.189646    9816 main.go:130] libmachine: Decoding PEM data...
	I1117 14:49:43.189701    9816 main.go:130] libmachine: Parsing certificate...
	I1117 14:49:43.189834    9816 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:49:43.210868    9816 main.go:130] libmachine: Decoding PEM data...
	I1117 14:49:43.210926    9816 main.go:130] libmachine: Parsing certificate...
	I1117 14:49:43.212066    9816 cli_runner.go:115] Run: docker network inspect offline-docker-20211117144907-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:49:43.323067    9816 cli_runner.go:162] docker network inspect offline-docker-20211117144907-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:49:43.323157    9816 network_create.go:254] running [docker network inspect offline-docker-20211117144907-2140] to gather additional debugging logs...
	I1117 14:49:43.323173    9816 cli_runner.go:115] Run: docker network inspect offline-docker-20211117144907-2140
	W1117 14:49:43.434416    9816 cli_runner.go:162] docker network inspect offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:43.434440    9816 network_create.go:257] error running [docker network inspect offline-docker-20211117144907-2140]: docker network inspect offline-docker-20211117144907-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: offline-docker-20211117144907-2140
	I1117 14:49:43.434453    9816 network_create.go:259] output of [docker network inspect offline-docker-20211117144907-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: offline-docker-20211117144907-2140
	
	** /stderr **
	I1117 14:49:43.434539    9816 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:49:43.548218    9816 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ac8490] amended:false}} dirty:map[] misses:0}
	I1117 14:49:43.548256    9816 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:43.548461    9816 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ac8490] amended:true}} dirty:map[192.168.49.0:0xc000ac8490 192.168.58.0:0xc00078a018] misses:0}
	I1117 14:49:43.548483    9816 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:43.548492    9816 network_create.go:106] attempt to create docker network offline-docker-20211117144907-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:49:43.548576    9816 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117144907-2140
	W1117 14:49:43.663108    9816 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117144907-2140 returned with exit code 1
	W1117 14:49:43.663160    9816 network_create.go:98] failed to create docker network offline-docker-20211117144907-2140 192.168.58.0/24, will retry: subnet is taken
	I1117 14:49:43.663391    9816 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ac8490] amended:true}} dirty:map[192.168.49.0:0xc000ac8490 192.168.58.0:0xc00078a018] misses:1}
	I1117 14:49:43.663425    9816 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:43.663647    9816 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ac8490] amended:true}} dirty:map[192.168.49.0:0xc000ac8490 192.168.58.0:0xc00078a018 192.168.67.0:0xc0001a43b0] misses:1}
	I1117 14:49:43.663660    9816 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:43.663669    9816 network_create.go:106] attempt to create docker network offline-docker-20211117144907-2140 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 14:49:43.663742    9816 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117144907-2140
	I1117 14:49:47.521506    9816 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true offline-docker-20211117144907-2140: (3.857663487s)
	I1117 14:49:47.521528    9816 network_create.go:90] docker network offline-docker-20211117144907-2140 192.168.67.0/24 created
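	The subnet hunt above (192.168.49.0 reserved from the first create, 192.168.58.0 rejected with "subnet is taken", 192.168.67.0 accepted) steps the third octet by 9 per candidate. A simplified sketch of that selection under those assumptions; the real picker in minikube's network.go also checks host interfaces and expiring reservations:

```python
import ipaddress

def next_free_subnet(reserved, start="192.168.49.0", step=9, tries=20):
    # Walk candidate /24 subnets (192.168.49.0 -> .58.0 -> .67.0 ...),
    # skipping any whose network address is reserved or taken.
    net = ipaddress.ip_network(start + "/24")
    for _ in range(tries):
        if str(net.network_address) not in reserved:
            return str(net)
        # step subnets, i.e. advance the third octet by `step`
        net = ipaddress.ip_network((int(net.network_address) + step * 256, 24))
    raise RuntimeError("no free private subnet found")

# 49.0 still has an unexpired reservation; 58.0 is taken by another network.
print(next_free_subnet({"192.168.49.0", "192.168.58.0"}))  # 192.168.67.0/24
```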
	I1117 14:49:47.521540    9816 kic.go:106] calculated static IP "192.168.67.2" for the "offline-docker-20211117144907-2140" container
	I1117 14:49:47.521642    9816 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:49:47.637720    9816 cli_runner.go:115] Run: docker volume create offline-docker-20211117144907-2140 --label name.minikube.sigs.k8s.io=offline-docker-20211117144907-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:49:47.749994    9816 oci.go:102] Successfully created a docker volume offline-docker-20211117144907-2140
	I1117 14:49:47.750157    9816 cli_runner.go:115] Run: docker run --rm --name offline-docker-20211117144907-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-20211117144907-2140 --entrypoint /usr/bin/test -v offline-docker-20211117144907-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:49:48.190555    9816 oci.go:106] Successfully prepared a docker volume offline-docker-20211117144907-2140
	E1117 14:49:48.190613    9816 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:49:48.190619    9816 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:49:48.190626    9816 client.go:171] LocalClient.Create took 5.001218098s
	I1117 14:49:48.190635    9816 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:49:48.190754    9816 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-20211117144907-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:49:50.191097    9816 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:49:50.191255    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:50.335840    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:50.335950    9816 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:50.523587    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:50.658361    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:50.658438    9816 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:50.992789    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:51.125939    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:51.126026    9816 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:51.592161    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:51.722881    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	W1117 14:49:51.722977    9816 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	
	W1117 14:49:51.723004    9816 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:51.723025    9816 start.go:129] duration metric: createHost completed in 8.561002354s
	I1117 14:49:51.723100    9816 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:49:51.723164    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:51.852075    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:51.852153    9816 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:52.055713    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:52.196105    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:52.196205    9816 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:52.496517    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:52.634060    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	I1117 14:49:52.634181    9816 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:53.306457    9816 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140
	W1117 14:49:53.856185    9816 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140 returned with exit code 1
	W1117 14:49:53.856314    9816 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	
	W1117 14:49:53.856374    9816 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-20211117144907-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-20211117144907-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140
	I1117 14:49:53.856391    9816 fix.go:57] fixHost completed within 31.002816446s
	I1117 14:49:53.856407    9816 start.go:80] releasing machines lock for "offline-docker-20211117144907-2140", held for 31.002899784s
	W1117 14:49:53.856644    9816 out.go:241] * Failed to start docker container. Running "minikube delete -p offline-docker-20211117144907-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p offline-docker-20211117144907-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:49:53.996459    9816 out.go:176] 
	W1117 14:49:53.996579    9816 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:49:53.996597    9816 out.go:241] * 
	* 
	W1117 14:49:53.997412    9816 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:49:54.123897    9816 out.go:176] 

** /stderr **
aab_offline_test.go:59: out/minikube-darwin-amd64 start -p offline-docker-20211117144907-2140 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 80
panic.go:642: *** TestOffline FAILED at 2021-11-17 14:49:54.153704 -0800 PST m=+1592.227492120
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-20211117144907-2140
helpers_test.go:235: (dbg) docker inspect offline-docker-20211117144907-2140:

-- stdout --
	[
	    {
	        "Name": "offline-docker-20211117144907-2140",
	        "Id": "758b9dcfed1d3fa2da427624486a203e71165cd2eddc184dc6168286dcae1294",
	        "Created": "2021-11-17T22:49:43.777198241Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-20211117144907-2140 -n offline-docker-20211117144907-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-20211117144907-2140 -n offline-docker-20211117144907-2140: exit status 7 (150.602239ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:49:54.420090   10266 status.go:247] status error: host: state: unknown state "offline-docker-20211117144907-2140": docker container inspect offline-docker-20211117144907-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: offline-docker-20211117144907-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-20211117144907-2140" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-20211117144907-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20211117144907-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20211117144907-2140: (12.521741474s)
--- FAIL: TestOffline (59.61s)
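Editor's note: the root cause repeats throughout the TestOffline trace above (`Exiting due to GUEST_PROVISION: ... create kic node: kernel modules: Unable to locate kernel modules`). When triaging a run like this, the failure signature can be pulled out of a saved trace with a quick grep; the sketch below simulates the captured log with a heredoc (the file path is hypothetical, substitute the real `logs.txt` from `minikube logs --file=logs.txt`).

```shell
# Simulate a captured minikube start trace (replace with the real logs.txt).
cat > /tmp/minikube-run.log <<'EOF'
W1117 14:49:53.996579    9816 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
EOF

# Surface the exit-reason code minikube reports on failure.
grep -o 'Exiting due to [A-Z_]*' /tmp/minikube-run.log | head -n1

# Surface the innermost cause at the end of the wrapped error chain.
grep -o 'kernel modules: [^:]*$' /tmp/minikube-run.log | head -n1
```

This only mechanizes what is already visible in the log; the suggested remediation remains the one printed by minikube itself (`minikube delete -p offline-docker-20211117144907-2140` and retry).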

TestAddons/Setup (45.81s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20211117142420-2140 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p addons-20211117142420-2140 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 80 (45.799227234s)

-- stdout --
	* [addons-20211117142420-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node addons-20211117142420-2140 in cluster addons-20211117142420-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "addons-20211117142420-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 14:24:20.295233    2410 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:24:20.295412    2410 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:24:20.295418    2410 out.go:310] Setting ErrFile to fd 2...
	I1117 14:24:20.295421    2410 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:24:20.295494    2410 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:24:20.295803    2410 out.go:304] Setting JSON to false
	I1117 14:24:20.319808    2410 start.go:112] hostinfo: {"hostname":"37310.local","uptime":1435,"bootTime":1637186425,"procs":342,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:24:20.319894    2410 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:24:20.346933    2410 out.go:176] * [addons-20211117142420-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:24:20.347140    2410 notify.go:174] Checking for updates...
	I1117 14:24:20.395740    2410 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:24:20.421642    2410 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:24:20.449596    2410 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:24:20.475539    2410 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:24:20.475747    2410 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:24:20.566629    2410 docker.go:132] docker version: linux-20.10.6
	I1117 14:24:20.566770    2410 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:24:20.744391    2410 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-11-17 22:24:20.68322419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:24:20.793123    2410 out.go:176] * Using the docker driver based on user configuration
	I1117 14:24:20.793168    2410 start.go:280] selected driver: docker
	I1117 14:24:20.793181    2410 start.go:775] validating driver "docker" against <nil>
	I1117 14:24:20.793211    2410 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:24:20.796435    2410 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:24:20.972655    2410 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-11-17 22:24:20.911833289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:24:20.972812    2410 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 14:24:20.972965    2410 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 14:24:20.972989    2410 cni.go:93] Creating CNI manager for ""
	I1117 14:24:20.973009    2410 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:24:20.973019    2410 start_flags.go:282] config:
	{Name:addons-20211117142420-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:addons-20211117142420-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:24:21.021632    2410 out.go:176] * Starting control plane node addons-20211117142420-2140 in cluster addons-20211117142420-2140
	I1117 14:24:21.021707    2410 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:24:21.047796    2410 out.go:176] * Pulling base image ...
	I1117 14:24:21.047871    2410 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:24:21.047956    2410 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 14:24:21.047977    2410 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:24:21.047984    2410 cache.go:57] Caching tarball of preloaded images
	I1117 14:24:21.048191    2410 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 14:24:21.048232    2410 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 14:24:21.050342    2410 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/addons-20211117142420-2140/config.json ...
	I1117 14:24:21.050545    2410 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/addons-20211117142420-2140/config.json: {Name:mkc454b234bffc27cdceb1eb466fc83207132b11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 14:24:21.166205    2410 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:24:21.166223    2410 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:24:21.166255    2410 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:24:21.166290    2410 start.go:313] acquiring machines lock for addons-20211117142420-2140: {Name:mk6dd5895d4730ce40981900a1ca93063aff3c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:24:21.166538    2410 start.go:317] acquired machines lock for "addons-20211117142420-2140" in 220.554µs
	I1117 14:24:21.166568    2410 start.go:89] Provisioning new machine with config: &{Name:addons-20211117142420-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:addons-20211117142420-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 14:24:21.166648    2410 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:24:21.193378    2410 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 14:24:21.193685    2410 start.go:160] libmachine.API.Create for "addons-20211117142420-2140" (driver="docker")
	I1117 14:24:21.193734    2410 client.go:168] LocalClient.Create starting
	I1117 14:24:21.194025    2410 main.go:130] libmachine: Creating CA: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:24:21.238348    2410 main.go:130] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:24:21.427716    2410 cli_runner.go:115] Run: docker network inspect addons-20211117142420-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:24:21.539950    2410 cli_runner.go:162] docker network inspect addons-20211117142420-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:24:21.540066    2410 network_create.go:254] running [docker network inspect addons-20211117142420-2140] to gather additional debugging logs...
	I1117 14:24:21.540086    2410 cli_runner.go:115] Run: docker network inspect addons-20211117142420-2140
	W1117 14:24:21.645804    2410 cli_runner.go:162] docker network inspect addons-20211117142420-2140 returned with exit code 1
	I1117 14:24:21.645829    2410 network_create.go:257] error running [docker network inspect addons-20211117142420-2140]: docker network inspect addons-20211117142420-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20211117142420-2140
	I1117 14:24:21.645843    2410 network_create.go:259] output of [docker network inspect addons-20211117142420-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20211117142420-2140
	
	** /stderr **
	I1117 14:24:21.645937    2410 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:24:21.754945    2410 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a900f0] misses:0}
	I1117 14:24:21.754988    2410 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:24:21.755006    2410 network_create.go:106] attempt to create docker network addons-20211117142420-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:24:21.755093    2410 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20211117142420-2140
	I1117 14:24:25.563884    2410 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20211117142420-2140: (3.808653031s)
	I1117 14:24:25.563910    2410 network_create.go:90] docker network addons-20211117142420-2140 192.168.49.0/24 created
	I1117 14:24:25.563931    2410 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20211117142420-2140" container
	I1117 14:24:25.564042    2410 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:24:25.672811    2410 cli_runner.go:115] Run: docker volume create addons-20211117142420-2140 --label name.minikube.sigs.k8s.io=addons-20211117142420-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:24:25.781859    2410 oci.go:102] Successfully created a docker volume addons-20211117142420-2140
	I1117 14:24:25.781984    2410 cli_runner.go:115] Run: docker run --rm --name addons-20211117142420-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20211117142420-2140 --entrypoint /usr/bin/test -v addons-20211117142420-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:24:26.554523    2410 oci.go:106] Successfully prepared a docker volume addons-20211117142420-2140
	I1117 14:24:26.554583    2410 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	E1117 14:24:26.554586    2410 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:24:26.554604    2410 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:24:26.554618    2410 client.go:171] LocalClient.Create took 5.360748348s
	I1117 14:24:26.554710    2410 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117142420-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:24:28.555001    2410 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:24:28.555100    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:24:28.674494    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	I1117 14:24:28.674583    2410 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:28.951056    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:24:29.086844    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	I1117 14:24:29.086935    2410 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:29.627517    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:24:29.741562    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	I1117 14:24:29.741636    2410 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:30.397271    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:24:30.516593    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	W1117 14:24:30.516687    2410 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	
	W1117 14:24:30.516714    2410 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:30.516734    2410 start.go:129] duration metric: createHost completed in 9.349861632s
	I1117 14:24:30.516743    2410 start.go:80] releasing machines lock for "addons-20211117142420-2140", held for 9.349979551s
	W1117 14:24:30.516784    2410 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:24:30.517483    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:30.634100    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:24:30.634148    2410 delete.go:82] Unable to get host status for addons-20211117142420-2140, assuming it has already been deleted: state: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	W1117 14:24:30.634298    2410 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:24:30.634311    2410 start.go:547] Will try again in 5 seconds ...
	I1117 14:24:32.558231    2410 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117142420-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.00335944s)
	I1117 14:24:32.558247    2410 kic.go:188] duration metric: took 6.003505 seconds to extract preloaded images to volume
	I1117 14:24:35.644447    2410 start.go:313] acquiring machines lock for addons-20211117142420-2140: {Name:mk6dd5895d4730ce40981900a1ca93063aff3c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:24:35.644640    2410 start.go:317] acquired machines lock for "addons-20211117142420-2140" in 159.551µs
	I1117 14:24:35.644709    2410 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:24:35.644722    2410 fix.go:55] fixHost starting: 
	I1117 14:24:35.645195    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:35.760216    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:24:35.760272    2410 fix.go:108] recreateIfNeeded on addons-20211117142420-2140: state= err=unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:35.760289    2410 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:24:35.787558    2410 out.go:176] * docker "addons-20211117142420-2140" container is missing, will recreate.
	I1117 14:24:35.787608    2410 delete.go:124] DEMOLISHING addons-20211117142420-2140 ...
	I1117 14:24:35.787939    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:35.897160    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:24:35.897203    2410 stop.go:75] unable to get state: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:35.897221    2410 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:35.897661    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:36.006154    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:24:36.006226    2410 delete.go:82] Unable to get host status for addons-20211117142420-2140, assuming it has already been deleted: state: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:36.006342    2410 cli_runner.go:115] Run: docker container inspect -f {{.Id}} addons-20211117142420-2140
	W1117 14:24:36.113607    2410 cli_runner.go:162] docker container inspect -f {{.Id}} addons-20211117142420-2140 returned with exit code 1
	I1117 14:24:36.113633    2410 kic.go:360] could not find the container addons-20211117142420-2140 to remove it. will try anyways
	I1117 14:24:36.113716    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:36.220423    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:24:36.220460    2410 oci.go:83] error getting container status, will try to delete anyways: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:36.220550    2410 cli_runner.go:115] Run: docker exec --privileged -t addons-20211117142420-2140 /bin/bash -c "sudo init 0"
	W1117 14:24:36.328738    2410 cli_runner.go:162] docker exec --privileged -t addons-20211117142420-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:24:36.328779    2410 oci.go:658] error shutdown addons-20211117142420-2140: docker exec --privileged -t addons-20211117142420-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:37.330754    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:37.442987    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:24:37.443027    2410 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:37.443037    2410 oci.go:672] temporary error: container addons-20211117142420-2140 status is  but expect it to be exited
	I1117 14:24:37.443058    2410 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:37.907139    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:38.019736    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:24:38.019774    2410 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:38.019783    2410 oci.go:672] temporary error: container addons-20211117142420-2140 status is  but expect it to be exited
	I1117 14:24:38.019805    2410 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:38.913181    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:39.028635    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:24:39.028677    2410 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:39.028687    2410 oci.go:672] temporary error: container addons-20211117142420-2140 status is  but expect it to be exited
	I1117 14:24:39.028707    2410 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:39.666611    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:39.779111    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:24:39.779148    2410 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:39.779156    2410 oci.go:672] temporary error: container addons-20211117142420-2140 status is  but expect it to be exited
	I1117 14:24:39.779176    2410 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:40.890147    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:40.999136    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:24:40.999183    2410 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:40.999195    2410 oci.go:672] temporary error: container addons-20211117142420-2140 status is  but expect it to be exited
	I1117 14:24:40.999220    2410 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:42.520692    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:42.633381    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:24:42.633423    2410 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:42.633432    2410 oci.go:672] temporary error: container addons-20211117142420-2140 status is  but expect it to be exited
	I1117 14:24:42.633452    2410 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:45.677301    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:45.788281    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:24:45.788321    2410 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:45.788328    2410 oci.go:672] temporary error: container addons-20211117142420-2140 status is  but expect it to be exited
	I1117 14:24:45.788350    2410 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:51.579948    2410 cli_runner.go:115] Run: docker container inspect addons-20211117142420-2140 --format={{.State.Status}}
	W1117 14:24:51.691753    2410 cli_runner.go:162] docker container inspect addons-20211117142420-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:24:51.691790    2410 oci.go:670] temporary error verifying shutdown: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:24:51.691797    2410 oci.go:672] temporary error: container addons-20211117142420-2140 status is  but expect it to be exited
	I1117 14:24:51.691821    2410 oci.go:87] couldn't shut down addons-20211117142420-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "addons-20211117142420-2140": docker container inspect addons-20211117142420-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	 
	I1117 14:24:51.691941    2410 cli_runner.go:115] Run: docker rm -f -v addons-20211117142420-2140
	I1117 14:24:51.803960    2410 cli_runner.go:115] Run: docker container inspect -f {{.Id}} addons-20211117142420-2140
	W1117 14:24:51.911034    2410 cli_runner.go:162] docker container inspect -f {{.Id}} addons-20211117142420-2140 returned with exit code 1
	I1117 14:24:51.911167    2410 cli_runner.go:115] Run: docker network inspect addons-20211117142420-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:24:52.019119    2410 cli_runner.go:115] Run: docker network rm addons-20211117142420-2140
	I1117 14:24:54.832356    2410 cli_runner.go:168] Completed: docker network rm addons-20211117142420-2140: (2.81311444s)
	W1117 14:24:54.832654    2410 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:24:54.832661    2410 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:24:55.842206    2410 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:24:55.869297    2410 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 14:24:55.869418    2410 start.go:160] libmachine.API.Create for "addons-20211117142420-2140" (driver="docker")
	I1117 14:24:55.869440    2410 client.go:168] LocalClient.Create starting
	I1117 14:24:55.869614    2410 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:24:55.869672    2410 main.go:130] libmachine: Decoding PEM data...
	I1117 14:24:55.869731    2410 main.go:130] libmachine: Parsing certificate...
	I1117 14:24:55.869833    2410 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:24:55.869885    2410 main.go:130] libmachine: Decoding PEM data...
	I1117 14:24:55.869898    2410 main.go:130] libmachine: Parsing certificate...
	I1117 14:24:55.891538    2410 cli_runner.go:115] Run: docker network inspect addons-20211117142420-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:24:56.001748    2410 cli_runner.go:162] docker network inspect addons-20211117142420-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:24:56.001865    2410 network_create.go:254] running [docker network inspect addons-20211117142420-2140] to gather additional debugging logs...
	I1117 14:24:56.001886    2410 cli_runner.go:115] Run: docker network inspect addons-20211117142420-2140
	W1117 14:24:56.112977    2410 cli_runner.go:162] docker network inspect addons-20211117142420-2140 returned with exit code 1
	I1117 14:24:56.113000    2410 network_create.go:257] error running [docker network inspect addons-20211117142420-2140]: docker network inspect addons-20211117142420-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20211117142420-2140
	I1117 14:24:56.113015    2410 network_create.go:259] output of [docker network inspect addons-20211117142420-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20211117142420-2140
	
	** /stderr **
	I1117 14:24:56.113111    2410 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:24:56.224778    2410 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a900f0] amended:false}} dirty:map[] misses:0}
	I1117 14:24:56.224808    2410 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:24:56.224988    2410 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a900f0] amended:true}} dirty:map[192.168.49.0:0xc000a900f0 192.168.58.0:0xc0001125b0] misses:0}
	I1117 14:24:56.224999    2410 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:24:56.225012    2410 network_create.go:106] attempt to create docker network addons-20211117142420-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:24:56.225097    2410 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20211117142420-2140
	I1117 14:25:00.065174    2410 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20211117142420-2140: (3.8399386s)
	I1117 14:25:00.065198    2410 network_create.go:90] docker network addons-20211117142420-2140 192.168.58.0/24 created
	I1117 14:25:00.065216    2410 kic.go:106] calculated static IP "192.168.58.2" for the "addons-20211117142420-2140" container
	I1117 14:25:00.065328    2410 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:25:00.172914    2410 cli_runner.go:115] Run: docker volume create addons-20211117142420-2140 --label name.minikube.sigs.k8s.io=addons-20211117142420-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:25:00.300048    2410 oci.go:102] Successfully created a docker volume addons-20211117142420-2140
	I1117 14:25:00.300180    2410 cli_runner.go:115] Run: docker run --rm --name addons-20211117142420-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20211117142420-2140 --entrypoint /usr/bin/test -v addons-20211117142420-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:25:00.709882    2410 oci.go:106] Successfully prepared a docker volume addons-20211117142420-2140
	E1117 14:25:00.709929    2410 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:25:00.709935    2410 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:25:00.709941    2410 client.go:171] LocalClient.Create took 4.840382901s
	I1117 14:25:00.709955    2410 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:25:00.710093    2410 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20211117142420-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:25:02.718417    2410 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:25:02.718560    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:25:02.917020    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	I1117 14:25:02.917121    2410 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:25:03.096144    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:25:03.219272    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	I1117 14:25:03.219397    2410 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:25:03.550247    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:25:03.667021    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	I1117 14:25:03.667113    2410 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:25:04.128113    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:25:04.247394    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	W1117 14:25:04.247504    2410 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	
	W1117 14:25:04.247526    2410 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:25:04.247540    2410 start.go:129] duration metric: createHost completed in 8.405078842s
	I1117 14:25:04.247634    2410 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:25:04.247730    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:25:04.363703    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	I1117 14:25:04.363783    2410 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:25:04.560335    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:25:04.687843    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	I1117 14:25:04.687924    2410 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:25:04.985522    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:25:05.114044    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	I1117 14:25:05.114126    2410 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:25:05.777614    2410 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140
	W1117 14:25:05.891950    2410 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140 returned with exit code 1
	W1117 14:25:05.892046    2410 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	
	W1117 14:25:05.892064    2410 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "addons-20211117142420-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20211117142420-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: addons-20211117142420-2140
	I1117 14:25:05.892103    2410 fix.go:57] fixHost completed within 30.246681592s
	I1117 14:25:05.892113    2410 start.go:80] releasing machines lock for "addons-20211117142420-2140", held for 30.246744925s
	W1117 14:25:05.892257    2410 out.go:241] * Failed to start docker container. Running "minikube delete -p addons-20211117142420-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:25:05.955923    2410 out.go:176] 
	W1117 14:25:05.956061    2410 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:25:05.956071    2410 out.go:241] * 
	W1117 14:25:05.956846    2410 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:25:06.042688    2410 out.go:176] 

                                                
                                                
** /stderr **
addons_test.go:78: out/minikube-darwin-amd64 start -p addons-20211117142420-2140 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 80
--- FAIL: TestAddons/Setup (45.81s)

                                                
                                    
TestCertOptions (53.8s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20211117145115-2140 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-20211117145115-2140 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: exit status 80 (45.682683917s)

                                                
                                                
-- stdout --
	* [cert-options-20211117145115-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node cert-options-20211117145115-2140 in cluster cert-options-20211117145115-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-options-20211117145115-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 14:51:21.279120   11116 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:51:55.704662   11116 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-options-20211117145115-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:52: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-20211117145115-2140 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost" : exit status 80
cert_options_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20211117145115-2140 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:61: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-20211117145115-2140 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 80 (328.989862ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20211117145115-2140": docker container inspect cert-options-20211117145115-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117145115-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_c1f8366d59c5f8f6460a712ebd6036fcc73bcb99_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:63: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-20211117145115-2140 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 80
cert_options_test.go:70: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:70: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:70: apiserver cert does not include localhost in SAN.
cert_options_test.go:70: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:83: failed to inspect container for the port get port 8555 for "cert-options-20211117145115-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-20211117145115-2140: exit status 1
stdout:


stderr:
Error: No such container: cert-options-20211117145115-2140
cert_options_test.go:86: expected to get a non-zero forwarded port but got 0
cert_options_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20211117145115-2140 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-20211117145115-2140 -- "sudo cat /etc/kubernetes/admin.conf": exit status 80 (283.216586ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20211117145115-2140": docker container inspect cert-options-20211117145115-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117145115-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_e59a677a82728474bde049b1a4510f5e357f9593_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:103: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-20211117145115-2140 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 80
cert_options_test.go:107: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "cert-options-20211117145115-2140": docker container inspect cert-options-20211117145115-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117145115-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_e59a677a82728474bde049b1a4510f5e357f9593_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:110: *** TestCertOptions FAILED at 2021-11-17 14:52:01.786098 -0800 PST m=+1719.858168877
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-20211117145115-2140
helpers_test.go:235: (dbg) docker inspect cert-options-20211117145115-2140:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "cert-options-20211117145115-2140",
	        "Id": "0db97ed85078aca35d164639c25653a041ca75b631bb660cd0c3bf13b028c3b2",
	        "Created": "2021-11-17T22:51:51.300688363Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-20211117145115-2140 -n cert-options-20211117145115-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-20211117145115-2140 -n cert-options-20211117145115-2140: exit status 7 (160.25779ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 14:52:02.077154   11503 status.go:247] status error: host: state: unknown state "cert-options-20211117145115-2140": docker container inspect cert-options-20211117145115-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-options-20211117145115-2140

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-20211117145115-2140" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-options-20211117145115-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20211117145115-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20211117145115-2140: (7.097458387s)
--- FAIL: TestCertOptions (53.80s)

                                                
                                    
TestCertExpiration (301.24s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20211117145056-2140 --memory=2048 --cert-expiration=3m --driver=docker 

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-20211117145056-2140 --memory=2048 --cert-expiration=3m --driver=docker : exit status 80 (46.876301746s)

                                                
                                                
-- stdout --
	* [cert-expiration-20211117145056-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node cert-expiration-20211117145056-2140 in cluster cert-expiration-20211117145056-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20211117145056-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 14:51:06.528429   10916 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:51:38.193720   10916 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20211117145056-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:126: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-20211117145056-2140 --memory=2048 --cert-expiration=3m --driver=docker " : exit status 80

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20211117145056-2140 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:132: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-20211117145056-2140 --memory=2048 --cert-expiration=8760h --driver=docker : exit status 80 (1m7.528717702s)

                                                
                                                
-- stdout --
	* [cert-expiration-20211117145056-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20211117145056-2140 in cluster cert-expiration-20211117145056-2140
	* Pulling base image ...
	* docker "cert-expiration-20211117145056-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20211117145056-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 14:55:10.206408   12028 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:55:45.212884   12028 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20211117145056-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:134: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-20211117145056-2140 --memory=2048 --cert-expiration=8760h --driver=docker " : exit status 80
cert_options_test.go:137: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-20211117145056-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node cert-expiration-20211117145056-2140 in cluster cert-expiration-20211117145056-2140
	* Pulling base image ...
	* docker "cert-expiration-20211117145056-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cert-expiration-20211117145056-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 14:55:10.206408   12028 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:55:45.212884   12028 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p cert-expiration-20211117145056-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:139: *** TestCertExpiration FAILED at 2021-11-17 14:55:51.111557 -0800 PST m=+1949.180541754
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertExpiration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-expiration-20211117145056-2140
helpers_test.go:235: (dbg) docker inspect cert-expiration-20211117145056-2140:

-- stdout --
	[
	    {
	        "Name": "cert-expiration-20211117145056-2140",
	        "Id": "76258aab23118f5208b4643669a5a88ed9cd7239dd8c7adc96adbd2a82d4477a",
	        "Created": "2021-11-17T22:55:40.412422684Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-20211117145056-2140 -n cert-expiration-20211117145056-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-20211117145056-2140 -n cert-expiration-20211117145056-2140: exit status 7 (167.763541ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:55:51.504842   12357 status.go:247] status error: host: state: unknown state "cert-expiration-20211117145056-2140": docker container inspect cert-expiration-20211117145056-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: cert-expiration-20211117145056-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-20211117145056-2140" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "cert-expiration-20211117145056-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20211117145056-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20211117145056-2140: (6.434331871s)
--- FAIL: TestCertExpiration (301.24s)

TestDockerFlags (58.83s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags


=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20211117145016-2140 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-20211117145016-2140 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 80 (49.983064331s)

-- stdout --
	* [docker-flags-20211117145016-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node docker-flags-20211117145016-2140 in cluster docker-flags-20211117145016-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-20211117145016-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 14:50:16.593227   10529 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:50:16.593372   10529 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:50:16.593377   10529 out.go:310] Setting ErrFile to fd 2...
	I1117 14:50:16.593380   10529 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:50:16.593463   10529 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:50:16.593779   10529 out.go:304] Setting JSON to false
	I1117 14:50:16.626380   10529 start.go:112] hostinfo: {"hostname":"37310.local","uptime":2991,"bootTime":1637186425,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:50:16.626548   10529 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:50:16.653617   10529 out.go:176] * [docker-flags-20211117145016-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:50:16.653763   10529 notify.go:174] Checking for updates...
	I1117 14:50:16.700369   10529 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:50:16.726290   10529 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:50:16.752404   10529 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:50:16.778451   10529 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:50:16.780175   10529 config.go:176] Loaded profile config "force-systemd-flag-20211117145006-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:50:16.780295   10529 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:50:16.780336   10529 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:50:16.900275   10529 docker.go:132] docker version: linux-20.10.6
	I1117 14:50:16.900399   10529 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:50:17.130849   10529 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:58 SystemTime:2021-11-17 22:50:17.054213316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:50:17.157807   10529 out.go:176] * Using the docker driver based on user configuration
	I1117 14:50:17.157840   10529 start.go:280] selected driver: docker
	I1117 14:50:17.157847   10529 start.go:775] validating driver "docker" against <nil>
	I1117 14:50:17.157859   10529 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:50:17.160202   10529 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:50:17.366568   10529 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:58 SystemTime:2021-11-17 22:50:17.295932251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:50:17.366663   10529 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 14:50:17.366783   10529 start_flags.go:753] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1117 14:50:17.366803   10529 cni.go:93] Creating CNI manager for ""
	I1117 14:50:17.366814   10529 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:50:17.366823   10529 start_flags.go:282] config:
	{Name:docker-flags-20211117145016-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:docker-flags-20211117145016-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:50:17.393665   10529 out.go:176] * Starting control plane node docker-flags-20211117145016-2140 in cluster docker-flags-20211117145016-2140
	I1117 14:50:17.393714   10529 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:50:17.419473   10529 out.go:176] * Pulling base image ...
	I1117 14:50:17.419528   10529 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:50:17.419604   10529 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:50:17.419643   10529 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 14:50:17.419684   10529 cache.go:57] Caching tarball of preloaded images
	I1117 14:50:17.419856   10529 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 14:50:17.419869   10529 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 14:50:17.420633   10529 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/docker-flags-20211117145016-2140/config.json ...
	I1117 14:50:17.420747   10529 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/docker-flags-20211117145016-2140/config.json: {Name:mk119442e7ab8650fd143ab802d88aab48781681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 14:50:17.579230   10529 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:50:17.579257   10529 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:50:17.579270   10529 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:50:17.579482   10529 start.go:313] acquiring machines lock for docker-flags-20211117145016-2140: {Name:mk953730392952201ab4a4cf79dcbfb05ccfe0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:50:17.579674   10529 start.go:317] acquired machines lock for "docker-flags-20211117145016-2140" in 177.838µs
	I1117 14:50:17.579711   10529 start.go:89] Provisioning new machine with config: &{Name:docker-flags-20211117145016-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:docker-flags-20211117145016-2140 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 14:50:17.579814   10529 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:50:17.606839   10529 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 14:50:17.607219   10529 start.go:160] libmachine.API.Create for "docker-flags-20211117145016-2140" (driver="docker")
	I1117 14:50:17.607277   10529 client.go:168] LocalClient.Create starting
	I1117 14:50:17.607425   10529 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:50:17.628299   10529 main.go:130] libmachine: Decoding PEM data...
	I1117 14:50:17.628333   10529 main.go:130] libmachine: Parsing certificate...
	I1117 14:50:17.628425   10529 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:50:17.628476   10529 main.go:130] libmachine: Decoding PEM data...
	I1117 14:50:17.628489   10529 main.go:130] libmachine: Parsing certificate...
	I1117 14:50:17.629254   10529 cli_runner.go:115] Run: docker network inspect docker-flags-20211117145016-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:50:17.785346   10529 cli_runner.go:162] docker network inspect docker-flags-20211117145016-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:50:17.785463   10529 network_create.go:254] running [docker network inspect docker-flags-20211117145016-2140] to gather additional debugging logs...
	I1117 14:50:17.785483   10529 cli_runner.go:115] Run: docker network inspect docker-flags-20211117145016-2140
	W1117 14:50:17.916461   10529 cli_runner.go:162] docker network inspect docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:50:17.916485   10529 network_create.go:257] error running [docker network inspect docker-flags-20211117145016-2140]: docker network inspect docker-flags-20211117145016-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20211117145016-2140
	I1117 14:50:17.916500   10529 network_create.go:259] output of [docker network inspect docker-flags-20211117145016-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20211117145016-2140
	
	** /stderr **
	I1117 14:50:17.916594   10529 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:50:18.045510   10529 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000f1a0] misses:0}
	I1117 14:50:18.045546   10529 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:18.045560   10529 network_create.go:106] attempt to create docker network docker-flags-20211117145016-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:50:18.045706   10529 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117145016-2140
	W1117 14:50:18.198441   10529 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117145016-2140 returned with exit code 1
	W1117 14:50:18.198504   10529 network_create.go:98] failed to create docker network docker-flags-20211117145016-2140 192.168.49.0/24, will retry: subnet is taken
	I1117 14:50:18.198771   10529 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000f1a0] amended:false}} dirty:map[] misses:0}
	I1117 14:50:18.198792   10529 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:18.199055   10529 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000f1a0] amended:true}} dirty:map[192.168.49.0:0xc00000f1a0 192.168.58.0:0xc000116148] misses:0}
	I1117 14:50:18.199071   10529 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:18.199093   10529 network_create.go:106] attempt to create docker network docker-flags-20211117145016-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:50:18.199196   10529 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117145016-2140
	I1117 14:50:22.270999   10529 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117145016-2140: (4.071705488s)
	I1117 14:50:22.271019   10529 network_create.go:90] docker network docker-flags-20211117145016-2140 192.168.58.0/24 created
	I1117 14:50:22.271037   10529 kic.go:106] calculated static IP "192.168.58.2" for the "docker-flags-20211117145016-2140" container
	I1117 14:50:22.271143   10529 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:50:22.381058   10529 cli_runner.go:115] Run: docker volume create docker-flags-20211117145016-2140 --label name.minikube.sigs.k8s.io=docker-flags-20211117145016-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:50:22.494005   10529 oci.go:102] Successfully created a docker volume docker-flags-20211117145016-2140
	I1117 14:50:22.494126   10529 cli_runner.go:115] Run: docker run --rm --name docker-flags-20211117145016-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20211117145016-2140 --entrypoint /usr/bin/test -v docker-flags-20211117145016-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:50:23.030678   10529 oci.go:106] Successfully prepared a docker volume docker-flags-20211117145016-2140
	E1117 14:50:23.030732   10529 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:50:23.030732   10529 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:50:23.030759   10529 client.go:171] LocalClient.Create took 5.42339924s
	I1117 14:50:23.030768   10529 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:50:23.030893   10529 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117145016-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:50:25.039599   10529 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:50:25.039724   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:50:25.194508   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:50:25.194596   10529 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:25.471128   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:50:25.605318   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:50:25.605392   10529 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:26.151258   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:50:26.276678   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:50:26.276761   10529 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:26.939579   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:50:27.082839   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	W1117 14:50:27.082940   10529 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	
	W1117 14:50:27.082964   10529 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:27.082974   10529 start.go:129] duration metric: createHost completed in 9.503028113s
	I1117 14:50:27.082980   10529 start.go:80] releasing machines lock for "docker-flags-20211117145016-2140", held for 9.503167963s
	W1117 14:50:27.082996   10529 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:50:27.083475   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:27.218023   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:27.218075   10529 delete.go:82] Unable to get host status for docker-flags-20211117145016-2140, assuming it has already been deleted: state: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	W1117 14:50:27.218220   10529 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:50:27.218240   10529 start.go:547] Will try again in 5 seconds ...
	I1117 14:50:28.641120   10529 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117145016-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.610083318s)
	I1117 14:50:28.641137   10529 kic.go:188] duration metric: took 5.610294 seconds to extract preloaded images to volume
	I1117 14:50:32.227772   10529 start.go:313] acquiring machines lock for docker-flags-20211117145016-2140: {Name:mk953730392952201ab4a4cf79dcbfb05ccfe0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:50:32.227928   10529 start.go:317] acquired machines lock for "docker-flags-20211117145016-2140" in 126.988µs
	I1117 14:50:32.227967   10529 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:50:32.227980   10529 fix.go:55] fixHost starting: 
	I1117 14:50:32.228443   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:32.348062   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:32.348101   10529 fix.go:108] recreateIfNeeded on docker-flags-20211117145016-2140: state= err=unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:32.348119   10529 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:50:32.375246   10529 out.go:176] * docker "docker-flags-20211117145016-2140" container is missing, will recreate.
	I1117 14:50:32.375330   10529 delete.go:124] DEMOLISHING docker-flags-20211117145016-2140 ...
	I1117 14:50:32.375572   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:32.489205   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:50:32.489244   10529 stop.go:75] unable to get state: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:32.489257   10529 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:32.489664   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:32.599461   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:32.599505   10529 delete.go:82] Unable to get host status for docker-flags-20211117145016-2140, assuming it has already been deleted: state: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:32.599611   10529 cli_runner.go:115] Run: docker container inspect -f {{.Id}} docker-flags-20211117145016-2140
	W1117 14:50:32.715935   10529 cli_runner.go:162] docker container inspect -f {{.Id}} docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:50:32.715977   10529 kic.go:360] could not find the container docker-flags-20211117145016-2140 to remove it. will try anyways
	I1117 14:50:32.716062   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:32.833275   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:50:32.833312   10529 oci.go:83] error getting container status, will try to delete anyways: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:32.833396   10529 cli_runner.go:115] Run: docker exec --privileged -t docker-flags-20211117145016-2140 /bin/bash -c "sudo init 0"
	W1117 14:50:32.942784   10529 cli_runner.go:162] docker exec --privileged -t docker-flags-20211117145016-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:50:32.942810   10529 oci.go:658] error shutdown docker-flags-20211117145016-2140: docker exec --privileged -t docker-flags-20211117145016-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:33.943332   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:34.058508   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:34.058555   10529 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:34.058565   10529 oci.go:672] temporary error: container docker-flags-20211117145016-2140 status is  but expect it to be exited
	I1117 14:50:34.058593   10529 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:34.521549   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:34.637201   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:34.637237   10529 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:34.637254   10529 oci.go:672] temporary error: container docker-flags-20211117145016-2140 status is  but expect it to be exited
	I1117 14:50:34.637275   10529 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:35.531359   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:35.647028   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:35.647067   10529 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:35.647076   10529 oci.go:672] temporary error: container docker-flags-20211117145016-2140 status is  but expect it to be exited
	I1117 14:50:35.647099   10529 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:36.289862   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:36.402968   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:36.403006   10529 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:36.403022   10529 oci.go:672] temporary error: container docker-flags-20211117145016-2140 status is  but expect it to be exited
	I1117 14:50:36.403044   10529 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:37.515864   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:37.638622   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:37.638666   10529 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:37.638678   10529 oci.go:672] temporary error: container docker-flags-20211117145016-2140 status is  but expect it to be exited
	I1117 14:50:37.638702   10529 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:39.156264   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:39.265946   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:39.265985   10529 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:39.265995   10529 oci.go:672] temporary error: container docker-flags-20211117145016-2140 status is  but expect it to be exited
	I1117 14:50:39.266015   10529 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:42.307766   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:42.415590   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:42.415636   10529 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:42.415646   10529 oci.go:672] temporary error: container docker-flags-20211117145016-2140 status is  but expect it to be exited
	I1117 14:50:42.415674   10529 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:48.206317   10529 cli_runner.go:115] Run: docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}
	W1117 14:50:48.328877   10529 cli_runner.go:162] docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:48.328943   10529 oci.go:670] temporary error verifying shutdown: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:50:48.328991   10529 oci.go:672] temporary error: container docker-flags-20211117145016-2140 status is  but expect it to be exited
	I1117 14:50:48.329039   10529 oci.go:87] couldn't shut down docker-flags-20211117145016-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	 
	I1117 14:50:48.329165   10529 cli_runner.go:115] Run: docker rm -f -v docker-flags-20211117145016-2140
	I1117 14:50:48.457320   10529 cli_runner.go:115] Run: docker container inspect -f {{.Id}} docker-flags-20211117145016-2140
	W1117 14:50:48.582572   10529 cli_runner.go:162] docker container inspect -f {{.Id}} docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:50:48.582690   10529 cli_runner.go:115] Run: docker network inspect docker-flags-20211117145016-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:50:48.706610   10529 cli_runner.go:115] Run: docker network rm docker-flags-20211117145016-2140
	I1117 14:50:52.846666   10529 cli_runner.go:168] Completed: docker network rm docker-flags-20211117145016-2140: (4.139959754s)
	W1117 14:50:52.846953   10529 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:50:52.846961   10529 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:50:53.856266   10529 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:50:53.890564   10529 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 14:50:53.890650   10529 start.go:160] libmachine.API.Create for "docker-flags-20211117145016-2140" (driver="docker")
	I1117 14:50:53.890674   10529 client.go:168] LocalClient.Create starting
	I1117 14:50:53.890776   10529 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:50:53.890828   10529 main.go:130] libmachine: Decoding PEM data...
	I1117 14:50:53.890854   10529 main.go:130] libmachine: Parsing certificate...
	I1117 14:50:53.890918   10529 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:50:53.911801   10529 main.go:130] libmachine: Decoding PEM data...
	I1117 14:50:53.911815   10529 main.go:130] libmachine: Parsing certificate...
	I1117 14:50:53.912259   10529 cli_runner.go:115] Run: docker network inspect docker-flags-20211117145016-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:50:54.023436   10529 cli_runner.go:162] docker network inspect docker-flags-20211117145016-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:50:54.023535   10529 network_create.go:254] running [docker network inspect docker-flags-20211117145016-2140] to gather additional debugging logs...
	I1117 14:50:54.023549   10529 cli_runner.go:115] Run: docker network inspect docker-flags-20211117145016-2140
	W1117 14:50:54.134851   10529 cli_runner.go:162] docker network inspect docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:50:54.134874   10529 network_create.go:257] error running [docker network inspect docker-flags-20211117145016-2140]: docker network inspect docker-flags-20211117145016-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: docker-flags-20211117145016-2140
	I1117 14:50:54.134885   10529 network_create.go:259] output of [docker network inspect docker-flags-20211117145016-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: docker-flags-20211117145016-2140
	
	** /stderr **
	I1117 14:50:54.134977   10529 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:50:54.246099   10529 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000f1a0] amended:true}} dirty:map[192.168.49.0:0xc00000f1a0 192.168.58.0:0xc000116148] misses:0}
	I1117 14:50:54.246128   10529 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:54.246307   10529 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000f1a0] amended:true}} dirty:map[192.168.49.0:0xc00000f1a0 192.168.58.0:0xc000116148] misses:1}
	I1117 14:50:54.246315   10529 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:54.246496   10529 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000f1a0] amended:true}} dirty:map[192.168.49.0:0xc00000f1a0 192.168.58.0:0xc000116148 192.168.67.0:0xc000116c30] misses:1}
	I1117 14:50:54.246512   10529 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:54.246521   10529 network_create.go:106] attempt to create docker network docker-flags-20211117145016-2140 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 14:50:54.246599   10529 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117145016-2140
	W1117 14:50:54.359333   10529 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117145016-2140 returned with exit code 1
	W1117 14:50:54.359383   10529 network_create.go:98] failed to create docker network docker-flags-20211117145016-2140 192.168.67.0/24, will retry: subnet is taken
	I1117 14:50:54.359622   10529 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000f1a0] amended:true}} dirty:map[192.168.49.0:0xc00000f1a0 192.168.58.0:0xc000116148 192.168.67.0:0xc000116c30] misses:2}
	I1117 14:50:54.359641   10529 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:54.359828   10529 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000f1a0] amended:true}} dirty:map[192.168.49.0:0xc00000f1a0 192.168.58.0:0xc000116148 192.168.67.0:0xc000116c30 192.168.76.0:0xc0002c0428] misses:2}
	I1117 14:50:54.359838   10529 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:54.359844   10529 network_create.go:106] attempt to create docker network docker-flags-20211117145016-2140 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1117 14:50:54.359943   10529 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117145016-2140
	I1117 14:51:00.443561   10529 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true docker-flags-20211117145016-2140: (6.083483934s)
	I1117 14:51:00.443584   10529 network_create.go:90] docker network docker-flags-20211117145016-2140 192.168.76.0/24 created
	I1117 14:51:00.443595   10529 kic.go:106] calculated static IP "192.168.76.2" for the "docker-flags-20211117145016-2140" container
	I1117 14:51:00.443699   10529 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:51:00.555026   10529 cli_runner.go:115] Run: docker volume create docker-flags-20211117145016-2140 --label name.minikube.sigs.k8s.io=docker-flags-20211117145016-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:51:00.667250   10529 oci.go:102] Successfully created a docker volume docker-flags-20211117145016-2140
	I1117 14:51:00.667403   10529 cli_runner.go:115] Run: docker run --rm --name docker-flags-20211117145016-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20211117145016-2140 --entrypoint /usr/bin/test -v docker-flags-20211117145016-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:51:01.109442   10529 oci.go:106] Successfully prepared a docker volume docker-flags-20211117145016-2140
	E1117 14:51:01.109501   10529 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:51:01.109511   10529 client.go:171] LocalClient.Create took 7.218735235s
	I1117 14:51:01.109524   10529 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:51:01.109544   10529 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:51:01.109687   10529 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20211117145016-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:51:03.109844   10529 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:51:03.109989   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:51:03.247384   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:51:03.247490   10529 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:51:03.427757   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:51:03.556547   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:51:03.556658   10529 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:51:03.887357   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:51:04.009313   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:51:04.009406   10529 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:51:04.477053   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:51:04.605868   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	W1117 14:51:04.605962   10529 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	
	W1117 14:51:04.605985   10529 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:51:04.605995   10529 start.go:129] duration metric: createHost completed in 10.749569881s
	I1117 14:51:04.606070   10529 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:51:04.606142   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:51:04.737362   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:51:04.737468   10529 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:51:04.933511   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:51:05.085666   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:51:05.085754   10529 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:51:05.383816   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:51:05.530449   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	I1117 14:51:05.530543   10529 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:51:06.193995   10529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140
	W1117 14:51:06.338051   10529 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140 returned with exit code 1
	W1117 14:51:06.338151   10529 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	
	W1117 14:51:06.338203   10529 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-20211117145016-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20211117145016-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	I1117 14:51:06.338263   10529 fix.go:57] fixHost completed within 34.109824779s
	I1117 14:51:06.338275   10529 start.go:80] releasing machines lock for "docker-flags-20211117145016-2140", held for 34.109876471s
	W1117 14:51:06.338417   10529 out.go:241] * Failed to start docker container. Running "minikube delete -p docker-flags-20211117145016-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p docker-flags-20211117145016-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:51:06.389402   10529 out.go:176] 
	W1117 14:51:06.389575   10529 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:51:06.389597   10529 out.go:241] * 
	* 
	W1117 14:51:06.390215   10529 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:51:06.462455   10529 out.go:176] 

** /stderr **
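The retry sequence near the top of the log shows minikube's free-subnet search: 192.168.49.0, 192.168.58.0, and 192.168.67.0 all have unexpired reservations, so it settles on 192.168.76.0/24. A minimal sketch of that selection logic follows; the step of 9 in the third octet is an assumption inferred from the 49 → 58 → 67 → 76 progression in the log, and `next_free_subnet` is a hypothetical helper, not minikube's actual `network.go` API:

```python
def next_free_subnet(reserved, start_octet=49, step=9, attempts=20):
    """Return the first 192.168.<octet>.0/24 subnet not in the reserved set,
    stepping the third octet the way the log above does (49 -> 58 -> 67 -> 76)."""
    octet = start_octet
    for _ in range(attempts):
        subnet = f"192.168.{octet}.0/24"
        if subnet not in reserved:
            return subnet
        octet += step  # subnet is taken or has an unexpired reservation
    raise RuntimeError("no free private subnet found")
```

With the three subnets from the log marked reserved, this returns "192.168.76.0/24", matching the `using free private subnet` line above.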
docker_test.go:48: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-20211117145016-2140 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 80
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20211117145016-2140 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-20211117145016-2140 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (361.221799ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_45ab9b4ee43b1ccee1cc1cad42a504b375b49bd8_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:53: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-20211117145016-2140 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:58: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:58: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20211117145016-2140 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:62: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-20211117145016-2140 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (309.931758ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_0c4d48d3465e4cc08ca5bd2bd06b407509a1612b_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:64: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-20211117145016-2140 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:68: expected "out/minikube-darwin-amd64 -p docker-flags-20211117145016-2140 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:642: *** TestDockerFlags FAILED at 2021-11-17 14:51:07.205574 -0800 PST m=+1665.278379011
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-20211117145016-2140
helpers_test.go:235: (dbg) docker inspect docker-flags-20211117145016-2140:

-- stdout --
	[
	    {
	        "Name": "docker-flags-20211117145016-2140",
	        "Id": "ec08ffe69d235616cf3e3e074f0635f5415e3207773d53191eeb79e8e34b9794",
	        "Created": "2021-11-17T22:50:54.47973092Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
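Note the inspect output above describes the Docker *network*, not a container: `"Containers": {}` confirms the bridge network was created but the node container never attached to it, consistent with the repeated `No such container` errors. A hedged sketch of that post-mortem check (`network_has_containers` is an illustrative helper, not part of the test suite):

```python
import json

def network_has_containers(inspect_json: str) -> bool:
    """Given `docker network inspect <name>` output (a JSON array of
    network objects), report whether any container is attached to the
    first network in the array."""
    networks = json.loads(inspect_json)
    return bool(networks[0].get("Containers"))
```

Applied to the dump above, this would return False: the network exists in isolation.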
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-20211117145016-2140 -n docker-flags-20211117145016-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-20211117145016-2140 -n docker-flags-20211117145016-2140: exit status 7 (160.718122ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:51:07.497054   11046 status.go:247] status error: host: state: unknown state "docker-flags-20211117145016-2140": docker container inspect docker-flags-20211117145016-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: docker-flags-20211117145016-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-20211117145016-2140" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-20211117145016-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20211117145016-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20211117145016-2140: (7.879560949s)
--- FAIL: TestDockerFlags (58.83s)
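The assertions at docker_test.go:58 fail because the expected `FOO=BAR` and `BAZ=BAT` pairs never appear in the (empty) `systemctl show docker --property=Environment` output. A hypothetical Python re-implementation of that check, for illustration only (`missing_env_pairs` is not the test's actual code):

```python
def missing_env_pairs(systemctl_output: str, expected):
    """Parse `systemctl show docker --property=Environment` output
    (an `Environment=KEY=VAL KEY2=VAL2` line) and return the expected
    KEY=VALUE pairs that are absent from it."""
    env = ""
    for line in systemctl_output.splitlines():
        if line.startswith("Environment="):
            env = line[len("Environment="):]
    present = set(env.split())
    return [pair for pair in expected if pair not in present]
```

Against the empty output captured above, both expected pairs come back missing, which is exactly what the two docker_test.go:58 failures report.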

TestForceSystemdFlag (49.76s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20211117145006-2140 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-20211117145006-2140 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 80 (44.966889267s)

-- stdout --
	* [force-systemd-flag-20211117145006-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node force-systemd-flag-20211117145006-2140 in cluster force-systemd-flag-20211117145006-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-20211117145006-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 14:50:06.990382   10387 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:50:06.990529   10387 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:50:06.990535   10387 out.go:310] Setting ErrFile to fd 2...
	I1117 14:50:06.990538   10387 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:50:06.990618   10387 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:50:06.990927   10387 out.go:304] Setting JSON to false
	I1117 14:50:07.019775   10387 start.go:112] hostinfo: {"hostname":"37310.local","uptime":2981,"bootTime":1637186425,"procs":358,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:50:07.019992   10387 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:50:07.047165   10387 out.go:176] * [force-systemd-flag-20211117145006-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:50:07.047268   10387 notify.go:174] Checking for updates...
	I1117 14:50:07.072893   10387 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:50:07.098642   10387 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:50:07.124876   10387 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:50:07.150883   10387 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:50:07.151323   10387 config.go:176] Loaded profile config "force-systemd-env-20211117144925-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:50:07.151405   10387 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:50:07.151439   10387 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:50:07.271376   10387 docker.go:132] docker version: linux-20.10.6
	I1117 14:50:07.271491   10387 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:50:07.515372   10387 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:58 SystemTime:2021-11-17 22:50:07.411501335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:50:07.542112   10387 out.go:176] * Using the docker driver based on user configuration
	I1117 14:50:07.542165   10387 start.go:280] selected driver: docker
	I1117 14:50:07.542171   10387 start.go:775] validating driver "docker" against <nil>
	I1117 14:50:07.542186   10387 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:50:07.544686   10387 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:50:07.746298   10387 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:58 SystemTime:2021-11-17 22:50:07.675940544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:50:07.746403   10387 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 14:50:07.746521   10387 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 14:50:07.746537   10387 cni.go:93] Creating CNI manager for ""
	I1117 14:50:07.746543   10387 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:50:07.746549   10387 start_flags.go:282] config:
	{Name:force-systemd-flag-20211117145006-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-flag-20211117145006-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:50:07.773263   10387 out.go:176] * Starting control plane node force-systemd-flag-20211117145006-2140 in cluster force-systemd-flag-20211117145006-2140
	I1117 14:50:07.773308   10387 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:50:07.819884   10387 out.go:176] * Pulling base image ...
	I1117 14:50:07.819922   10387 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:50:07.819960   10387 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 14:50:07.819970   10387 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:50:07.819976   10387 cache.go:57] Caching tarball of preloaded images
	I1117 14:50:07.820116   10387 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 14:50:07.820135   10387 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 14:50:07.820782   10387 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/force-systemd-flag-20211117145006-2140/config.json ...
	I1117 14:50:07.820861   10387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/force-systemd-flag-20211117145006-2140/config.json: {Name:mk1fc6330211c3f1c311b911953386f177520a35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 14:50:07.957369   10387 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:50:07.957391   10387 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:50:07.957401   10387 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:50:07.957436   10387 start.go:313] acquiring machines lock for force-systemd-flag-20211117145006-2140: {Name:mk70e56447a95d35b00092c4a9610752682c8e7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:50:07.957572   10387 start.go:317] acquired machines lock for "force-systemd-flag-20211117145006-2140" in 123.866µs
	I1117 14:50:07.957600   10387 start.go:89] Provisioning new machine with config: &{Name:force-systemd-flag-20211117145006-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-flag-20211117145006-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 14:50:07.957653   10387 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:50:08.004712   10387 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 14:50:08.005021   10387 start.go:160] libmachine.API.Create for "force-systemd-flag-20211117145006-2140" (driver="docker")
	I1117 14:50:08.005062   10387 client.go:168] LocalClient.Create starting
	I1117 14:50:08.005242   10387 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:50:08.005319   10387 main.go:130] libmachine: Decoding PEM data...
	I1117 14:50:08.005361   10387 main.go:130] libmachine: Parsing certificate...
	I1117 14:50:08.005468   10387 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:50:08.005521   10387 main.go:130] libmachine: Decoding PEM data...
	I1117 14:50:08.005542   10387 main.go:130] libmachine: Parsing certificate...
	I1117 14:50:08.006535   10387 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117145006-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:50:08.128471   10387 cli_runner.go:162] docker network inspect force-systemd-flag-20211117145006-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:50:08.128577   10387 network_create.go:254] running [docker network inspect force-systemd-flag-20211117145006-2140] to gather additional debugging logs...
	I1117 14:50:08.128595   10387 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117145006-2140
	W1117 14:50:08.255435   10387 cli_runner.go:162] docker network inspect force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:08.255460   10387 network_create.go:257] error running [docker network inspect force-systemd-flag-20211117145006-2140]: docker network inspect force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20211117145006-2140
	I1117 14:50:08.255473   10387 network_create.go:259] output of [docker network inspect force-systemd-flag-20211117145006-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20211117145006-2140
	
	** /stderr **
	I1117 14:50:08.255584   10387 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:50:08.410422   10387 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000186218] misses:0}
	I1117 14:50:08.410454   10387 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:08.410470   10387 network_create.go:106] attempt to create docker network force-systemd-flag-20211117145006-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:50:08.410568   10387 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117145006-2140
	I1117 14:50:13.425743   10387 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117145006-2140: (5.015046101s)
	I1117 14:50:13.425772   10387 network_create.go:90] docker network force-systemd-flag-20211117145006-2140 192.168.49.0/24 created
	I1117 14:50:13.425790   10387 kic.go:106] calculated static IP "192.168.49.2" for the "force-systemd-flag-20211117145006-2140" container
	I1117 14:50:13.425900   10387 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:50:13.537973   10387 cli_runner.go:115] Run: docker volume create force-systemd-flag-20211117145006-2140 --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117145006-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:50:13.648249   10387 oci.go:102] Successfully created a docker volume force-systemd-flag-20211117145006-2140
	I1117 14:50:13.648433   10387 cli_runner.go:115] Run: docker run --rm --name force-systemd-flag-20211117145006-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117145006-2140 --entrypoint /usr/bin/test -v force-systemd-flag-20211117145006-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:50:14.154652   10387 oci.go:106] Successfully prepared a docker volume force-systemd-flag-20211117145006-2140
	E1117 14:50:14.154711   10387 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:50:14.154726   10387 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:50:14.154741   10387 client.go:171] LocalClient.Create took 6.149588238s
	I1117 14:50:14.154745   10387 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:50:14.154852   10387 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117145006-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:50:16.156023   10387 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:50:16.156101   10387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140
	W1117 14:50:16.319047   10387 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:16.319158   10387 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:16.600979   10387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140
	W1117 14:50:16.860094   10387 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:16.860202   10387 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:17.406664   10387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140
	W1117 14:50:17.550652   10387 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140 returned with exit code 1
	W1117 14:50:17.550742   10387 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	
	W1117 14:50:17.550774   10387 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:17.550784   10387 start.go:129] duration metric: createHost completed in 9.592991731s
	I1117 14:50:17.550800   10387 start.go:80] releasing machines lock for "force-systemd-flag-20211117145006-2140", held for 9.593082506s
	W1117 14:50:17.550819   10387 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:50:17.551320   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:17.717325   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:17.717395   10387 delete.go:82] Unable to get host status for force-systemd-flag-20211117145006-2140, assuming it has already been deleted: state: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	W1117 14:50:17.717577   10387 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:50:17.717593   10387 start.go:547] Will try again in 5 seconds ...
	I1117 14:50:20.495487   10387 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117145006-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.340501088s)
	I1117 14:50:20.495501   10387 kic.go:188] duration metric: took 6.340672 seconds to extract preloaded images to volume
	I1117 14:50:22.720872   10387 start.go:313] acquiring machines lock for force-systemd-flag-20211117145006-2140: {Name:mk70e56447a95d35b00092c4a9610752682c8e7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:50:22.720976   10387 start.go:317] acquired machines lock for "force-systemd-flag-20211117145006-2140" in 84.771µs
	I1117 14:50:22.721004   10387 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:50:22.721011   10387 fix.go:55] fixHost starting: 
	I1117 14:50:22.721277   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:22.849136   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:22.849181   10387 fix.go:108] recreateIfNeeded on force-systemd-flag-20211117145006-2140: state= err=unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:22.849198   10387 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:50:22.877435   10387 out.go:176] * docker "force-systemd-flag-20211117145006-2140" container is missing, will recreate.
	I1117 14:50:22.877474   10387 delete.go:124] DEMOLISHING force-systemd-flag-20211117145006-2140 ...
	I1117 14:50:22.877586   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:22.997348   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:50:22.997390   10387 stop.go:75] unable to get state: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:22.997409   10387 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:22.997829   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:23.118250   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:23.118291   10387 delete.go:82] Unable to get host status for force-systemd-flag-20211117145006-2140, assuming it has already been deleted: state: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:23.118381   10387 cli_runner.go:115] Run: docker container inspect -f {{.Id}} force-systemd-flag-20211117145006-2140
	W1117 14:50:23.242636   10387 cli_runner.go:162] docker container inspect -f {{.Id}} force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:23.242667   10387 kic.go:360] could not find the container force-systemd-flag-20211117145006-2140 to remove it. will try anyways
	I1117 14:50:23.242761   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:23.363361   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:50:23.363398   10387 oci.go:83] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:23.363482   10387 cli_runner.go:115] Run: docker exec --privileged -t force-systemd-flag-20211117145006-2140 /bin/bash -c "sudo init 0"
	W1117 14:50:23.492490   10387 cli_runner.go:162] docker exec --privileged -t force-systemd-flag-20211117145006-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:50:23.492519   10387 oci.go:658] error shutdown force-systemd-flag-20211117145006-2140: docker exec --privileged -t force-systemd-flag-20211117145006-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:24.493004   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:24.625582   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:24.625650   10387 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:24.625681   10387 oci.go:672] temporary error: container force-systemd-flag-20211117145006-2140 status is  but expect it to be exited
	I1117 14:50:24.625705   10387 retry.go:31] will retry after 468.857094ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:25.095298   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:25.241676   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:25.241716   10387 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:25.241726   10387 oci.go:672] temporary error: container force-systemd-flag-20211117145006-2140 status is  but expect it to be exited
	I1117 14:50:25.241748   10387 retry.go:31] will retry after 693.478123ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:25.939494   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:26.067394   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:26.067437   10387 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:26.067457   10387 oci.go:672] temporary error: container force-systemd-flag-20211117145006-2140 status is  but expect it to be exited
	I1117 14:50:26.067479   10387 retry.go:31] will retry after 1.335175957s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:27.406162   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:27.548417   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:27.548474   10387 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:27.548485   10387 oci.go:672] temporary error: container force-systemd-flag-20211117145006-2140 status is  but expect it to be exited
	I1117 14:50:27.548518   10387 retry.go:31] will retry after 954.512469ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:28.509465   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:28.620937   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:28.620977   10387 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:28.620987   10387 oci.go:672] temporary error: container force-systemd-flag-20211117145006-2140 status is  but expect it to be exited
	I1117 14:50:28.621010   10387 retry.go:31] will retry after 1.661814363s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:30.289711   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:30.403009   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:30.403052   10387 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:30.403059   10387 oci.go:672] temporary error: container force-systemd-flag-20211117145006-2140 status is  but expect it to be exited
	I1117 14:50:30.403079   10387 retry.go:31] will retry after 2.266618642s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:32.669952   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:32.792467   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:32.792524   10387 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:32.792537   10387 oci.go:672] temporary error: container force-systemd-flag-20211117145006-2140 status is  but expect it to be exited
	I1117 14:50:32.792562   10387 retry.go:31] will retry after 4.561443331s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:37.356364   10387 cli_runner.go:115] Run: docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}
	W1117 14:50:37.471963   10387 cli_runner.go:162] docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:50:37.472001   10387 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:37.472023   10387 oci.go:672] temporary error: container force-systemd-flag-20211117145006-2140 status is  but expect it to be exited
	I1117 14:50:37.472050   10387 oci.go:87] couldn't shut down force-systemd-flag-20211117145006-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	 
	I1117 14:50:37.472128   10387 cli_runner.go:115] Run: docker rm -f -v force-systemd-flag-20211117145006-2140
	I1117 14:50:37.589445   10387 cli_runner.go:115] Run: docker container inspect -f {{.Id}} force-systemd-flag-20211117145006-2140
	W1117 14:50:37.705833   10387 cli_runner.go:162] docker container inspect -f {{.Id}} force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:37.705928   10387 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117145006-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:50:37.815051   10387 cli_runner.go:115] Run: docker network rm force-systemd-flag-20211117145006-2140
	I1117 14:50:40.555180   10387 cli_runner.go:168] Completed: docker network rm force-systemd-flag-20211117145006-2140: (2.740058834s)
	W1117 14:50:40.555447   10387 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:50:40.555454   10387 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:50:41.556228   10387 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:50:41.632052   10387 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 14:50:41.632190   10387 start.go:160] libmachine.API.Create for "force-systemd-flag-20211117145006-2140" (driver="docker")
	I1117 14:50:41.632224   10387 client.go:168] LocalClient.Create starting
	I1117 14:50:41.632403   10387 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:50:41.632484   10387 main.go:130] libmachine: Decoding PEM data...
	I1117 14:50:41.632509   10387 main.go:130] libmachine: Parsing certificate...
	I1117 14:50:41.632602   10387 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:50:41.632659   10387 main.go:130] libmachine: Decoding PEM data...
	I1117 14:50:41.632678   10387 main.go:130] libmachine: Parsing certificate...
	I1117 14:50:41.633715   10387 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117145006-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:50:41.747273   10387 cli_runner.go:162] docker network inspect force-systemd-flag-20211117145006-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:50:41.747376   10387 network_create.go:254] running [docker network inspect force-systemd-flag-20211117145006-2140] to gather additional debugging logs...
	I1117 14:50:41.747393   10387 cli_runner.go:115] Run: docker network inspect force-systemd-flag-20211117145006-2140
	W1117 14:50:41.858791   10387 cli_runner.go:162] docker network inspect force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:41.858815   10387 network_create.go:257] error running [docker network inspect force-systemd-flag-20211117145006-2140]: docker network inspect force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-flag-20211117145006-2140
	I1117 14:50:41.858827   10387 network_create.go:259] output of [docker network inspect force-systemd-flag-20211117145006-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-flag-20211117145006-2140
	
	** /stderr **
	I1117 14:50:41.858923   10387 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:50:41.991479   10387 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000186218] amended:false}} dirty:map[] misses:0}
	I1117 14:50:41.991507   10387 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:41.991677   10387 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000186218] amended:true}} dirty:map[192.168.49.0:0xc000186218 192.168.58.0:0xc00000e9e8] misses:0}
	I1117 14:50:41.991688   10387 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:41.991695   10387 network_create.go:106] attempt to create docker network force-systemd-flag-20211117145006-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:50:41.991769   10387 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117145006-2140
	W1117 14:50:42.101801   10387 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117145006-2140 returned with exit code 1
	W1117 14:50:42.101835   10387 network_create.go:98] failed to create docker network force-systemd-flag-20211117145006-2140 192.168.58.0/24, will retry: subnet is taken
	I1117 14:50:42.102031   10387 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000186218] amended:true}} dirty:map[192.168.49.0:0xc000186218 192.168.58.0:0xc00000e9e8] misses:1}
	I1117 14:50:42.102049   10387 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:42.102212   10387 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000186218] amended:true}} dirty:map[192.168.49.0:0xc000186218 192.168.58.0:0xc00000e9e8 192.168.67.0:0xc000186040] misses:1}
	I1117 14:50:42.102224   10387 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:50:42.102231   10387 network_create.go:106] attempt to create docker network force-systemd-flag-20211117145006-2140 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 14:50:42.102302   10387 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117145006-2140
	I1117 14:50:45.928661   10387 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-flag-20211117145006-2140: (3.826255549s)
	I1117 14:50:45.928689   10387 network_create.go:90] docker network force-systemd-flag-20211117145006-2140 192.168.67.0/24 created
	I1117 14:50:45.928703   10387 kic.go:106] calculated static IP "192.168.67.2" for the "force-systemd-flag-20211117145006-2140" container
	I1117 14:50:45.928822   10387 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:50:46.038968   10387 cli_runner.go:115] Run: docker volume create force-systemd-flag-20211117145006-2140 --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117145006-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:50:46.152987   10387 oci.go:102] Successfully created a docker volume force-systemd-flag-20211117145006-2140
	I1117 14:50:46.153132   10387 cli_runner.go:115] Run: docker run --rm --name force-systemd-flag-20211117145006-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-20211117145006-2140 --entrypoint /usr/bin/test -v force-systemd-flag-20211117145006-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:50:46.619901   10387 oci.go:106] Successfully prepared a docker volume force-systemd-flag-20211117145006-2140
	E1117 14:50:46.619957   10387 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:50:46.619967   10387 client.go:171] LocalClient.Create took 4.987669014s
	I1117 14:50:46.619978   10387 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:50:46.619998   10387 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:50:46.620141   10387 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-20211117145006-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:50:48.625942   10387 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:50:48.626033   10387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140
	W1117 14:50:48.747455   10387 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:48.747542   10387 retry.go:31] will retry after 328.409991ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:49.084858   10387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140
	W1117 14:50:49.218702   10387 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:49.218788   10387 retry.go:31] will retry after 267.848952ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:49.487331   10387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140
	W1117 14:50:49.614594   10387 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:49.614674   10387 retry.go:31] will retry after 495.369669ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:50.110694   10387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140
	W1117 14:50:50.240754   10387 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140 returned with exit code 1
	W1117 14:50:50.240885   10387 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	
	W1117 14:50:50.240921   10387 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:50.240931   10387 start.go:129] duration metric: createHost completed in 8.684509302s
	I1117 14:50:50.241013   10387 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:50:50.241085   10387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140
	W1117 14:50:50.370884   10387 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:50.370961   10387 retry.go:31] will retry after 242.222461ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:50.613461   10387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140
	W1117 14:50:50.744176   10387 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:50.744257   10387 retry.go:31] will retry after 293.637806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:51.038309   10387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140
	W1117 14:50:51.170080   10387 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140 returned with exit code 1
	I1117 14:50:51.170161   10387 retry.go:31] will retry after 446.119795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:51.626373   10387 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140
	W1117 14:50:51.756044   10387 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140 returned with exit code 1
	W1117 14:50:51.756122   10387 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	
	W1117 14:50:51.756158   10387 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-20211117145006-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-20211117145006-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	I1117 14:50:51.756177   10387 fix.go:57] fixHost completed within 29.034775464s
	I1117 14:50:51.756188   10387 start.go:80] releasing machines lock for "force-systemd-flag-20211117145006-2140", held for 29.034812698s
	W1117 14:50:51.756325   10387 out.go:241] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-20211117145006-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-20211117145006-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:50:51.804975   10387 out.go:176] 
	W1117 14:50:51.805111   10387 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:50:51.805125   10387 out.go:241] * 
	* 
	W1117 14:50:51.805768   10387 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:50:51.889554   10387 out.go:176] 

** /stderr **
docker_test.go:88: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-20211117145006-2140 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 80
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20211117145006-2140 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:105: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-20211117145006-2140 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (283.52608ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:107: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-20211117145006-2140 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:101: *** TestForceSystemdFlag FAILED at 2021-11-17 14:50:52.201849 -0800 PST m=+1650.274855547
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-20211117145006-2140
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-20211117145006-2140:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-20211117145006-2140",
	        "Id": "fab2079f414cbf0044135267a1ee58a64aca98a8d16d12a865dd225ee21428f4",
	        "Created": "2021-11-17T22:50:42.212755857Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-20211117145006-2140 -n force-systemd-flag-20211117145006-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-20211117145006-2140 -n force-systemd-flag-20211117145006-2140: exit status 7 (151.722461ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:50:52.464664   10854 status.go:247] status error: host: state: unknown state "force-systemd-flag-20211117145006-2140": docker container inspect force-systemd-flag-20211117145006-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-flag-20211117145006-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-20211117145006-2140" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-20211117145006-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20211117145006-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20211117145006-2140: (4.234582017s)
--- FAIL: TestForceSystemdFlag (49.76s)

TestForceSystemdEnv (51.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20211117144925-2140 --memory=2048 --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-20211117144925-2140 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 80 (44.293298961s)

-- stdout --
	* [force-systemd-env-20211117144925-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Starting control plane node force-systemd-env-20211117144925-2140 in cluster force-systemd-env-20211117144925-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-20211117144925-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I1117 14:49:25.147950    9991 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:49:25.148094    9991 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:49:25.148100    9991 out.go:310] Setting ErrFile to fd 2...
	I1117 14:49:25.148103    9991 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:49:25.148175    9991 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:49:25.148486    9991 out.go:304] Setting JSON to false
	I1117 14:49:25.173602    9991 start.go:112] hostinfo: {"hostname":"37310.local","uptime":2940,"bootTime":1637186425,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:49:25.173696    9991 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:49:25.201067    9991 out.go:176] * [force-systemd-env-20211117144925-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:49:25.201254    9991 notify.go:174] Checking for updates...
	I1117 14:49:25.248676    9991 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:49:25.274311    9991 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:49:25.300442    9991 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:49:25.326419    9991 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:49:25.352568    9991 out.go:176]   - MINIKUBE_FORCE_SYSTEMD=true
	I1117 14:49:25.353472    9991 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:49:25.353666    9991 config.go:176] Loaded profile config "offline-docker-20211117144907-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:49:25.353729    9991 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:49:25.449630    9991 docker.go:132] docker version: linux-20.10.6
	I1117 14:49:25.449774    9991 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:49:25.630681    9991 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:49:25.573339178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:49:25.657960    9991 out.go:176] * Using the docker driver based on user configuration
	I1117 14:49:25.658010    9991 start.go:280] selected driver: docker
	I1117 14:49:25.658019    9991 start.go:775] validating driver "docker" against <nil>
	I1117 14:49:25.658041    9991 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:49:25.661400    9991 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:49:25.841414    9991 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:49:25.78529037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:49:25.841522    9991 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 14:49:25.841634    9991 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 14:49:25.841650    9991 cni.go:93] Creating CNI manager for ""
	I1117 14:49:25.841655    9991 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:49:25.841664    9991 start_flags.go:282] config:
	{Name:force-systemd-env-20211117144925-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-env-20211117144925-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:49:25.868497    9991 out.go:176] * Starting control plane node force-systemd-env-20211117144925-2140 in cluster force-systemd-env-20211117144925-2140
	I1117 14:49:25.868622    9991 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:49:25.894214    9991 out.go:176] * Pulling base image ...
	I1117 14:49:25.894330    9991 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:49:25.894398    9991 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:49:25.894447    9991 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 14:49:25.894481    9991 cache.go:57] Caching tarball of preloaded images
	I1117 14:49:25.895282    9991 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 14:49:25.895501    9991 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 14:49:25.896131    9991 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/force-systemd-env-20211117144925-2140/config.json ...
	I1117 14:49:25.896429    9991 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/force-systemd-env-20211117144925-2140/config.json: {Name:mk6b1b2c9870213bfb16a6fa0ae1cbbce771e528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 14:49:26.013134    9991 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:49:26.013157    9991 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:49:26.013169    9991 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:49:26.013211    9991 start.go:313] acquiring machines lock for force-systemd-env-20211117144925-2140: {Name:mk6829b3fafde247e1bdc01a3f7ee54088d743a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:49:26.013353    9991 start.go:317] acquired machines lock for "force-systemd-env-20211117144925-2140" in 129.782µs
	I1117 14:49:26.013382    9991 start.go:89] Provisioning new machine with config: &{Name:force-systemd-env-20211117144925-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:force-systemd-env-20211117144925-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 14:49:26.013457    9991 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:49:26.062000    9991 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 14:49:26.062344    9991 start.go:160] libmachine.API.Create for "force-systemd-env-20211117144925-2140" (driver="docker")
	I1117 14:49:26.062400    9991 client.go:168] LocalClient.Create starting
	I1117 14:49:26.062571    9991 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:49:26.062650    9991 main.go:130] libmachine: Decoding PEM data...
	I1117 14:49:26.062682    9991 main.go:130] libmachine: Parsing certificate...
	I1117 14:49:26.062805    9991 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:49:26.062860    9991 main.go:130] libmachine: Decoding PEM data...
	I1117 14:49:26.062883    9991 main.go:130] libmachine: Parsing certificate...
	I1117 14:49:26.063976    9991 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117144925-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:49:26.173340    9991 cli_runner.go:162] docker network inspect force-systemd-env-20211117144925-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:49:26.173447    9991 network_create.go:254] running [docker network inspect force-systemd-env-20211117144925-2140] to gather additional debugging logs...
	I1117 14:49:26.173465    9991 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117144925-2140
	W1117 14:49:26.285139    9991 cli_runner.go:162] docker network inspect force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:49:26.285167    9991 network_create.go:257] error running [docker network inspect force-systemd-env-20211117144925-2140]: docker network inspect force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20211117144925-2140
	I1117 14:49:26.285182    9991 network_create.go:259] output of [docker network inspect force-systemd-env-20211117144925-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20211117144925-2140
	
	** /stderr **
	I1117 14:49:26.285280    9991 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:49:26.401440    9991 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00065a5e0] misses:0}
	I1117 14:49:26.401474    9991 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:26.401489    9991 network_create.go:106] attempt to create docker network force-systemd-env-20211117144925-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:49:26.401575    9991 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117144925-2140
	W1117 14:49:26.513151    9991 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117144925-2140 returned with exit code 1
	W1117 14:49:26.513187    9991 network_create.go:98] failed to create docker network force-systemd-env-20211117144925-2140 192.168.49.0/24, will retry: subnet is taken
	I1117 14:49:26.513399    9991 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00065a5e0] amended:false}} dirty:map[] misses:0}
	I1117 14:49:26.513414    9991 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:26.513605    9991 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00065a5e0] amended:true}} dirty:map[192.168.49.0:0xc00065a5e0 192.168.58.0:0xc0006ce190] misses:0}
	I1117 14:49:26.513617    9991 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:26.513623    9991 network_create.go:106] attempt to create docker network force-systemd-env-20211117144925-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:49:26.513704    9991 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117144925-2140
	I1117 14:49:30.411070    9991 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117144925-2140: (3.897284986s)
	I1117 14:49:30.411091    9991 network_create.go:90] docker network force-systemd-env-20211117144925-2140 192.168.58.0/24 created
	I1117 14:49:30.411106    9991 kic.go:106] calculated static IP "192.168.58.2" for the "force-systemd-env-20211117144925-2140" container
	I1117 14:49:30.411216    9991 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:49:30.520044    9991 cli_runner.go:115] Run: docker volume create force-systemd-env-20211117144925-2140 --label name.minikube.sigs.k8s.io=force-systemd-env-20211117144925-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:49:30.630311    9991 oci.go:102] Successfully created a docker volume force-systemd-env-20211117144925-2140
	I1117 14:49:30.630420    9991 cli_runner.go:115] Run: docker run --rm --name force-systemd-env-20211117144925-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20211117144925-2140 --entrypoint /usr/bin/test -v force-systemd-env-20211117144925-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:49:31.123948    9991 oci.go:106] Successfully prepared a docker volume force-systemd-env-20211117144925-2140
	E1117 14:49:31.123999    9991 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:49:31.124007    9991 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:49:31.124030    9991 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:49:31.124032    9991 client.go:171] LocalClient.Create took 5.061555284s
	I1117 14:49:31.124120    9991 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117144925-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:49:33.130314    9991 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:49:33.130415    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:49:33.258293    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:49:33.258382    9991 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:33.539773    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:49:33.660855    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:49:33.660937    9991 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:34.206044    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:49:34.328277    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:49:34.328351    9991 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:34.989615    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:49:35.146220    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	W1117 14:49:35.146305    9991 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	
	W1117 14:49:35.146332    9991 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:35.146349    9991 start.go:129] duration metric: createHost completed in 9.132762301s
	I1117 14:49:35.146357    9991 start.go:80] releasing machines lock for "force-systemd-env-20211117144925-2140", held for 9.132872265s
	W1117 14:49:35.146374    9991 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:49:35.146853    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:35.283559    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:35.283648    9991 delete.go:82] Unable to get host status for force-systemd-env-20211117144925-2140, assuming it has already been deleted: state: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	W1117 14:49:35.283815    9991 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:49:35.283829    9991 start.go:547] Will try again in 5 seconds ...
	I1117 14:49:37.503822    9991 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117144925-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.379576008s)
	I1117 14:49:37.503837    9991 kic.go:188] duration metric: took 6.379722 seconds to extract preloaded images to volume
	I1117 14:49:40.288928    9991 start.go:313] acquiring machines lock for force-systemd-env-20211117144925-2140: {Name:mk6829b3fafde247e1bdc01a3f7ee54088d743a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:49:40.289086    9991 start.go:317] acquired machines lock for "force-systemd-env-20211117144925-2140" in 130.897µs
	I1117 14:49:40.289124    9991 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:49:40.289136    9991 fix.go:55] fixHost starting: 
	I1117 14:49:40.289604    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:40.402750    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:40.402789    9991 fix.go:108] recreateIfNeeded on force-systemd-env-20211117144925-2140: state= err=unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:40.402802    9991 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:49:40.428636    9991 out.go:176] * docker "force-systemd-env-20211117144925-2140" container is missing, will recreate.
	I1117 14:49:40.428650    9991 delete.go:124] DEMOLISHING force-systemd-env-20211117144925-2140 ...
	I1117 14:49:40.428772    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:40.536866    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:49:40.536918    9991 stop.go:75] unable to get state: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:40.536929    9991 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:40.537356    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:40.647170    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:40.647216    9991 delete.go:82] Unable to get host status for force-systemd-env-20211117144925-2140, assuming it has already been deleted: state: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:40.647313    9991 cli_runner.go:115] Run: docker container inspect -f {{.Id}} force-systemd-env-20211117144925-2140
	W1117 14:49:40.759169    9991 cli_runner.go:162] docker container inspect -f {{.Id}} force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:49:40.759194    9991 kic.go:360] could not find the container force-systemd-env-20211117144925-2140 to remove it. will try anyways
	I1117 14:49:40.759287    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:40.868754    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:49:40.868789    9991 oci.go:83] error getting container status, will try to delete anyways: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:40.868877    9991 cli_runner.go:115] Run: docker exec --privileged -t force-systemd-env-20211117144925-2140 /bin/bash -c "sudo init 0"
	W1117 14:49:40.975761    9991 cli_runner.go:162] docker exec --privileged -t force-systemd-env-20211117144925-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:49:40.975786    9991 oci.go:658] error shutdown force-systemd-env-20211117144925-2140: docker exec --privileged -t force-systemd-env-20211117144925-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:41.981099    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:42.091197    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:42.091235    9991 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:42.091243    9991 oci.go:672] temporary error: container force-systemd-env-20211117144925-2140 status is  but expect it to be exited
	I1117 14:49:42.091265    9991 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:42.555651    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:42.667954    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:42.667990    9991 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:42.667996    9991 oci.go:672] temporary error: container force-systemd-env-20211117144925-2140 status is  but expect it to be exited
	I1117 14:49:42.668020    9991 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:43.561599    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:43.678154    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:43.678195    9991 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:43.678203    9991 oci.go:672] temporary error: container force-systemd-env-20211117144925-2140 status is  but expect it to be exited
	I1117 14:49:43.678228    9991 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:44.322684    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:44.436246    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:44.436289    9991 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:44.436297    9991 oci.go:672] temporary error: container force-systemd-env-20211117144925-2140 status is  but expect it to be exited
	I1117 14:49:44.436320    9991 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:45.544431    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:45.668405    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:45.668476    9991 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:45.668498    9991 oci.go:672] temporary error: container force-systemd-env-20211117144925-2140 status is  but expect it to be exited
	I1117 14:49:45.668541    9991 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:47.186314    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:47.296149    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:47.296187    9991 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:47.296194    9991 oci.go:672] temporary error: container force-systemd-env-20211117144925-2140 status is  but expect it to be exited
	I1117 14:49:47.296216    9991 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:50.338657    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:50.472184    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:50.472229    9991 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:50.472241    9991 oci.go:672] temporary error: container force-systemd-env-20211117144925-2140 status is  but expect it to be exited
	I1117 14:49:50.472282    9991 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:56.255717    9991 cli_runner.go:115] Run: docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}
	W1117 14:49:56.365998    9991 cli_runner.go:162] docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:49:56.366034    9991 oci.go:670] temporary error verifying shutdown: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:49:56.366041    9991 oci.go:672] temporary error: container force-systemd-env-20211117144925-2140 status is  but expect it to be exited
	I1117 14:49:56.366067    9991 oci.go:87] couldn't shut down force-systemd-env-20211117144925-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	 
	I1117 14:49:56.366142    9991 cli_runner.go:115] Run: docker rm -f -v force-systemd-env-20211117144925-2140
	I1117 14:49:56.484525    9991 cli_runner.go:115] Run: docker container inspect -f {{.Id}} force-systemd-env-20211117144925-2140
	W1117 14:49:56.601095    9991 cli_runner.go:162] docker container inspect -f {{.Id}} force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:49:56.601210    9991 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117144925-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:49:56.716027    9991 cli_runner.go:162] docker network inspect force-systemd-env-20211117144925-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:49:56.716129    9991 network_create.go:254] running [docker network inspect force-systemd-env-20211117144925-2140] to gather additional debugging logs...
	I1117 14:49:56.716146    9991 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117144925-2140
	W1117 14:49:56.831194    9991 cli_runner.go:162] docker network inspect force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:49:56.831223    9991 network_create.go:257] error running [docker network inspect force-systemd-env-20211117144925-2140]: docker network inspect force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20211117144925-2140
	I1117 14:49:56.831236    9991 network_create.go:259] output of [docker network inspect force-systemd-env-20211117144925-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20211117144925-2140
	
	** /stderr **
	W1117 14:49:56.831484    9991 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:49:56.831490    9991 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:49:57.839047    9991 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:49:57.865884    9991 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1117 14:49:57.865986    9991 start.go:160] libmachine.API.Create for "force-systemd-env-20211117144925-2140" (driver="docker")
	I1117 14:49:57.866010    9991 client.go:168] LocalClient.Create starting
	I1117 14:49:57.866147    9991 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:49:57.866191    9991 main.go:130] libmachine: Decoding PEM data...
	I1117 14:49:57.866204    9991 main.go:130] libmachine: Parsing certificate...
	I1117 14:49:57.866268    9991 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:49:57.866305    9991 main.go:130] libmachine: Decoding PEM data...
	I1117 14:49:57.866316    9991 main.go:130] libmachine: Parsing certificate...
	I1117 14:49:57.887075    9991 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117144925-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:49:58.001638    9991 cli_runner.go:162] docker network inspect force-systemd-env-20211117144925-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:49:58.001733    9991 network_create.go:254] running [docker network inspect force-systemd-env-20211117144925-2140] to gather additional debugging logs...
	I1117 14:49:58.001751    9991 cli_runner.go:115] Run: docker network inspect force-systemd-env-20211117144925-2140
	W1117 14:49:58.113360    9991 cli_runner.go:162] docker network inspect force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:49:58.113386    9991 network_create.go:257] error running [docker network inspect force-systemd-env-20211117144925-2140]: docker network inspect force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: force-systemd-env-20211117144925-2140
	I1117 14:49:58.113404    9991 network_create.go:259] output of [docker network inspect force-systemd-env-20211117144925-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: force-systemd-env-20211117144925-2140
	
	** /stderr **
	I1117 14:49:58.113491    9991 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:49:58.227919    9991 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00065a5e0] amended:true}} dirty:map[192.168.49.0:0xc00065a5e0 192.168.58.0:0xc0006ce190] misses:0}
	I1117 14:49:58.227949    9991 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:58.228121    9991 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00065a5e0] amended:true}} dirty:map[192.168.49.0:0xc00065a5e0 192.168.58.0:0xc0006ce190] misses:1}
	I1117 14:49:58.228130    9991 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:58.228301    9991 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00065a5e0] amended:true}} dirty:map[192.168.49.0:0xc00065a5e0 192.168.58.0:0xc0006ce190 192.168.67.0:0xc0006ce0f8] misses:1}
	I1117 14:49:58.228312    9991 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:58.228318    9991 network_create.go:106] attempt to create docker network force-systemd-env-20211117144925-2140 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1117 14:49:58.228404    9991 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117144925-2140
	W1117 14:49:58.344488    9991 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117144925-2140 returned with exit code 1
	W1117 14:49:58.344536    9991 network_create.go:98] failed to create docker network force-systemd-env-20211117144925-2140 192.168.67.0/24, will retry: subnet is taken
	I1117 14:49:58.344760    9991 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00065a5e0] amended:true}} dirty:map[192.168.49.0:0xc00065a5e0 192.168.58.0:0xc0006ce190 192.168.67.0:0xc0006ce0f8] misses:2}
	I1117 14:49:58.344782    9991 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:58.344948    9991 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00065a5e0] amended:true}} dirty:map[192.168.49.0:0xc00065a5e0 192.168.58.0:0xc0006ce190 192.168.67.0:0xc0006ce0f8 192.168.76.0:0xc000112100] misses:2}
	I1117 14:49:58.344959    9991 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:49:58.344969    9991 network_create.go:106] attempt to create docker network force-systemd-env-20211117144925-2140 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1117 14:49:58.345050    9991 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117144925-2140
	I1117 14:50:03.233474    9991 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true force-systemd-env-20211117144925-2140: (4.888325032s)
	I1117 14:50:03.233496    9991 network_create.go:90] docker network force-systemd-env-20211117144925-2140 192.168.76.0/24 created
	I1117 14:50:03.233509    9991 kic.go:106] calculated static IP "192.168.76.2" for the "force-systemd-env-20211117144925-2140" container
	I1117 14:50:03.233610    9991 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:50:03.346617    9991 cli_runner.go:115] Run: docker volume create force-systemd-env-20211117144925-2140 --label name.minikube.sigs.k8s.io=force-systemd-env-20211117144925-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:50:03.459454    9991 oci.go:102] Successfully created a docker volume force-systemd-env-20211117144925-2140
	I1117 14:50:03.459600    9991 cli_runner.go:115] Run: docker run --rm --name force-systemd-env-20211117144925-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-20211117144925-2140 --entrypoint /usr/bin/test -v force-systemd-env-20211117144925-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:50:03.887704    9991 oci.go:106] Successfully prepared a docker volume force-systemd-env-20211117144925-2140
	E1117 14:50:03.887748    9991 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:50:03.887757    9991 client.go:171] LocalClient.Create took 6.021661128s
	I1117 14:50:03.887767    9991 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:50:03.887785    9991 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:50:03.887909    9991 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-20211117144925-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:50:05.888352    9991 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:50:05.888472    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:50:06.035740    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:50:06.035945    9991 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:50:06.214750    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:50:06.352092    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:50:06.352172    9991 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:50:06.682925    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:50:06.832676    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:50:06.832760    9991 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:50:07.298844    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:50:07.453728    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	W1117 14:50:07.453864    9991 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	
	W1117 14:50:07.453895    9991 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:50:07.453907    9991 start.go:129] duration metric: createHost completed in 9.614681786s
	I1117 14:50:07.453984    9991 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:50:07.454053    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:50:07.619497    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:50:07.619611    9991 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:50:07.815957    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:50:07.948589    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:50:07.948662    9991 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:50:08.255742    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:50:08.410151    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	I1117 14:50:08.410240    9991 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:50:09.079795    9991 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140
	W1117 14:50:09.213016    9991 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140 returned with exit code 1
	W1117 14:50:09.213108    9991 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	
	W1117 14:50:09.213138    9991 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-20211117144925-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-20211117144925-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	I1117 14:50:09.213153    9991 fix.go:57] fixHost completed within 28.923628702s
	I1117 14:50:09.213165    9991 start.go:80] releasing machines lock for "force-systemd-env-20211117144925-2140", held for 28.923679133s
	W1117 14:50:09.213321    9991 out.go:241] * Failed to start docker container. Running "minikube delete -p force-systemd-env-20211117144925-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-20211117144925-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:50:09.276825    9991 out.go:176] 
	W1117 14:50:09.276964    9991 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:50:09.276976    9991 out.go:241] * 
	* 
	W1117 14:50:09.277529    9991 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:50:09.402564    9991 out.go:176] 

** /stderr **
docker_test.go:153: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-20211117144925-2140 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 80
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20211117144925-2140 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:105: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-20211117144925-2140 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (326.69208ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:107: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-20211117144925-2140 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:162: *** TestForceSystemdEnv FAILED at 2021-11-17 14:50:09.739275 -0800 PST m=+1607.812852720
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-20211117144925-2140
helpers_test.go:235: (dbg) docker inspect force-systemd-env-20211117144925-2140:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-20211117144925-2140",
	        "Id": "674e23dc7508b6ada0a227ff5c7f53081bd858f43dabd2535e9e4350d2fb3cc6",
	        "Created": "2021-11-17T22:49:58.471469542Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-20211117144925-2140 -n force-systemd-env-20211117144925-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-20211117144925-2140 -n force-systemd-env-20211117144925-2140: exit status 7 (190.14615ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:50:10.062027   10460 status.go:247] status error: host: state: unknown state "force-systemd-env-20211117144925-2140": docker container inspect force-systemd-env-20211117144925-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: force-systemd-env-20211117144925-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-20211117144925-2140" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-20211117144925-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20211117144925-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20211117144925-2140: (6.477703126s)
--- FAIL: TestForceSystemdEnv (51.43s)

TestErrorSpam/setup (45.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20211117142510-2140 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 --driver=docker 
error_spam_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p nospam-20211117142510-2140 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 --driver=docker : exit status 80 (45.628419294s)

-- stdout --
	* [nospam-20211117142510-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node nospam-20211117142510-2140 in cluster nospam-20211117142510-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	* docker "nospam-20211117142510-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2250MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 14:25:16.349394    2681 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:25:50.596061    2681 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p nospam-20211117142510-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:81: "out/minikube-darwin-amd64 start -p nospam-20211117142510-2140 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 --driver=docker " failed: exit status 80
error_spam_test.go:94: unexpected stderr: "E1117 14:25:16.349394    2681 oci.go:197] error getting kernel modules path: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "E1117 14:25:50.596061    2681 oci.go:197] error getting kernel modules path: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "* Failed to start docker container. Running \"minikube delete -p nospam-20211117142510-2140\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
error_spam_test.go:94: unexpected stderr: "* "
error_spam_test.go:94: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:94: unexpected stderr: "│                                                                                             │"
error_spam_test.go:94: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:94: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:94: unexpected stderr: "│                                                                                             │"
error_spam_test.go:94: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:94: unexpected stderr: "│                                                                                             │"
error_spam_test.go:94: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:108: minikube stdout:
* [nospam-20211117142510-2140] minikube v1.24.0 on Darwin 11.2.3
- MINIKUBE_LOCATION=12739
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
* Using the docker driver based on user configuration
* Starting control plane node nospam-20211117142510-2140 in cluster nospam-20211117142510-2140
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* docker "nospam-20211117142510-2140" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2250MB) ...

error_spam_test.go:109: minikube stderr:
E1117 14:25:16.349394    2681 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
E1117 14:25:50.596061    2681 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
* Failed to start docker container. Running "minikube delete -p nospam-20211117142510-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:119: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:119: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:119: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (45.63s)

TestFunctional/serial/StartWithProxy (46.32s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2015: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2015: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : exit status 80 (45.744259449s)

-- stdout --
	* [functional-20211117142648-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node functional-20211117142648-2140 in cluster functional-20211117142648-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20211117142648-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51095 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51095 to docker env.
	E1117 14:26:54.763127    3149 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51095 to docker env.
	E1117 14:27:29.365697    3149 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p functional-20211117142648-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2017: failed minikube start. args "out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker ": exit status 80
functional_test.go:2022: start stdout=* [functional-20211117142648-2140] minikube v1.24.0 on Darwin 11.2.3
- MINIKUBE_LOCATION=12739
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
* Using the docker driver based on user configuration
* Starting control plane node functional-20211117142648-2140 in cluster functional-20211117142648-2140
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=4000MB) ...
* docker "functional-20211117142648-2140" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=4000MB) ...

, want: *Found network options:*
functional_test.go:2027: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51095 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51095 to docker env.
E1117 14:26:54.763127    3149 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
! Local proxy ignored: not passing HTTP_PROXY=localhost:51095 to docker env.
E1117 14:27:29.365697    3149 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
* Failed to start docker container. Running "minikube delete -p functional-20211117142648-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "bd02cb16baf8299496d8ca31cc48de4704b41642b4901b88a0ac7a0c22587183",
	        "Created": "2021-11-17T22:27:24.472887691Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (156.701757ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:27:35.306619    3376 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/StartWithProxy (46.32s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
functional_test.go:579: audit.json does not contain the profile "functional-20211117142648-2140"
--- FAIL: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (69.62s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:600: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --alsologtostderr -v=8
functional_test.go:600: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --alsologtostderr -v=8: exit status 80 (1m9.334501093s)

-- stdout --
	* [functional-20211117142648-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20211117142648-2140 in cluster functional-20211117142648-2140
	* Pulling base image ...
	* docker "functional-20211117142648-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20211117142648-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 14:27:35.346990    3381 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:27:35.347117    3381 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:27:35.347124    3381 out.go:310] Setting ErrFile to fd 2...
	I1117 14:27:35.347127    3381 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:27:35.347203    3381 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:27:35.347462    3381 out.go:304] Setting JSON to false
	I1117 14:27:35.372217    3381 start.go:112] hostinfo: {"hostname":"37310.local","uptime":1630,"bootTime":1637186425,"procs":344,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:27:35.372312    3381 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:27:35.416416    3381 out.go:176] * [functional-20211117142648-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:27:35.416548    3381 notify.go:174] Checking for updates...
	I1117 14:27:35.500669    3381 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:27:35.591326    3381 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:27:35.679624    3381 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:27:35.720090    3381 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:27:35.720769    3381 config.go:176] Loaded profile config "functional-20211117142648-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:27:35.720829    3381 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:27:35.813102    3381 docker.go:132] docker version: linux-20.10.6
	I1117 14:27:35.813243    3381 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:27:35.992979    3381 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:27:35.942167467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:27:36.039310    3381 out.go:176] * Using the docker driver based on existing profile
	I1117 14:27:36.039334    3381 start.go:280] selected driver: docker
	I1117 14:27:36.039341    3381 start.go:775] validating driver "docker" against &{Name:functional-20211117142648-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117142648-2140 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:27:36.039395    3381 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:27:36.039604    3381 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:27:36.216108    3381 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:27:36.144565339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:27:36.218102    3381 cni.go:93] Creating CNI manager for ""
	I1117 14:27:36.218123    3381 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:27:36.218137    3381 start_flags.go:282] config:
	{Name:functional-20211117142648-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117142648-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISock
et: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:27:36.244988    3381 out.go:176] * Starting control plane node functional-20211117142648-2140 in cluster functional-20211117142648-2140
	I1117 14:27:36.245025    3381 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:27:36.270851    3381 out.go:176] * Pulling base image ...
	I1117 14:27:36.270909    3381 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:27:36.271002    3381 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 14:27:36.271007    3381 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:27:36.271033    3381 cache.go:57] Caching tarball of preloaded images
	I1117 14:27:36.271262    3381 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 14:27:36.271294    3381 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 14:27:36.272165    3381 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/functional-20211117142648-2140/config.json ...
	I1117 14:27:36.389123    3381 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:27:36.389139    3381 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:27:36.389149    3381 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:27:36.389194    3381 start.go:313] acquiring machines lock for functional-20211117142648-2140: {Name:mk0ffa36ccb8092a6f2338223436899c154ee29e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:27:36.389273    3381 start.go:317] acquired machines lock for "functional-20211117142648-2140" in 60.628µs
	I1117 14:27:36.389294    3381 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:27:36.389299    3381 fix.go:55] fixHost starting: 
	I1117 14:27:36.389550    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:36.496885    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:27:36.496939    3381 fix.go:108] recreateIfNeeded on functional-20211117142648-2140: state= err=unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:36.496956    3381 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:27:36.523760    3381 out.go:176] * docker "functional-20211117142648-2140" container is missing, will recreate.
	I1117 14:27:36.523817    3381 delete.go:124] DEMOLISHING functional-20211117142648-2140 ...
	I1117 14:27:36.524041    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:36.633142    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:27:36.633179    3381 stop.go:75] unable to get state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:36.633193    3381 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:36.633608    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:36.739073    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:27:36.739117    3381 delete.go:82] Unable to get host status for functional-20211117142648-2140, assuming it has already been deleted: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:36.739213    3381 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
	W1117 14:27:36.845970    3381 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
	I1117 14:27:36.845997    3381 kic.go:360] could not find the container functional-20211117142648-2140 to remove it. will try anyways
	I1117 14:27:36.846081    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:36.969429    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:27:36.969469    3381 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:36.969563    3381 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0"
	W1117 14:27:37.076863    3381 cli_runner.go:162] docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:27:37.076889    3381 oci.go:658] error shutdown functional-20211117142648-2140: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:38.080292    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:38.190544    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:27:38.190602    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:38.190621    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:27:38.190652    3381 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:38.745441    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:38.856896    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:27:38.856933    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:38.856941    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:27:38.856961    3381 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:39.945611    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:40.055529    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:27:40.055570    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:40.055580    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:27:40.055599    3381 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:41.369061    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:41.478760    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:27:41.478803    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:41.478817    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:27:41.478838    3381 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:43.069477    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:43.182845    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:27:43.182883    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:43.182892    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:27:43.182921    3381 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:45.533931    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:45.647004    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:27:45.647044    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:45.647052    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:27:45.647072    3381 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:50.156193    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:50.267447    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:27:50.267488    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:50.267499    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:27:50.267519    3381 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:53.492015    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:27:53.605415    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:27:53.605454    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:27:53.605463    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:27:53.605488    3381 oci.go:87] couldn't shut down functional-20211117142648-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	 
	I1117 14:27:53.605566    3381 cli_runner.go:115] Run: docker rm -f -v functional-20211117142648-2140
	I1117 14:27:53.712580    3381 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
	W1117 14:27:53.821582    3381 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
	I1117 14:27:53.821691    3381 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:27:53.928366    3381 cli_runner.go:115] Run: docker network rm functional-20211117142648-2140
	I1117 14:27:56.577075    3381 cli_runner.go:168] Completed: docker network rm functional-20211117142648-2140: (2.648616844s)
	W1117 14:27:56.577352    3381 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:27:56.577370    3381 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:27:57.583326    3381 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:27:57.632055    3381 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 14:27:57.632215    3381 start.go:160] libmachine.API.Create for "functional-20211117142648-2140" (driver="docker")
	I1117 14:27:57.632253    3381 client.go:168] LocalClient.Create starting
	I1117 14:27:57.632469    3381 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:27:57.632549    3381 main.go:130] libmachine: Decoding PEM data...
	I1117 14:27:57.632584    3381 main.go:130] libmachine: Parsing certificate...
	I1117 14:27:57.632722    3381 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:27:57.632776    3381 main.go:130] libmachine: Decoding PEM data...
	I1117 14:27:57.632801    3381 main.go:130] libmachine: Parsing certificate...
	I1117 14:27:57.633819    3381 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:27:57.743829    3381 cli_runner.go:162] docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:27:57.743926    3381 network_create.go:254] running [docker network inspect functional-20211117142648-2140] to gather additional debugging logs...
	I1117 14:27:57.743942    3381 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140
	W1117 14:27:57.851583    3381 cli_runner.go:162] docker network inspect functional-20211117142648-2140 returned with exit code 1
	I1117 14:27:57.851607    3381 network_create.go:257] error running [docker network inspect functional-20211117142648-2140]: docker network inspect functional-20211117142648-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20211117142648-2140
	I1117 14:27:57.851618    3381 network_create.go:259] output of [docker network inspect functional-20211117142648-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20211117142648-2140
	
	** /stderr **
	I1117 14:27:57.851711    3381 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:27:57.961702    3381 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00061ca50] misses:0}
	I1117 14:27:57.961737    3381 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:27:57.961753    3381 network_create.go:106] attempt to create docker network functional-20211117142648-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:27:57.961839    3381 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140
	I1117 14:28:01.867249    3381 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140: (3.905280824s)
	I1117 14:28:01.867271    3381 network_create.go:90] docker network functional-20211117142648-2140 192.168.49.0/24 created
	I1117 14:28:01.867286    3381 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117142648-2140" container
	I1117 14:28:01.867404    3381 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:28:01.993993    3381 cli_runner.go:115] Run: docker volume create functional-20211117142648-2140 --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:28:02.099154    3381 oci.go:102] Successfully created a docker volume functional-20211117142648-2140
	I1117 14:28:02.099276    3381 cli_runner.go:115] Run: docker run --rm --name functional-20211117142648-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --entrypoint /usr/bin/test -v functional-20211117142648-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:28:02.519137    3381 oci.go:106] Successfully prepared a docker volume functional-20211117142648-2140
	E1117 14:28:02.519187    3381 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:28:02.519190    3381 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:28:02.519210    3381 client.go:171] LocalClient.Create took 4.886834595s
	I1117 14:28:02.519221    3381 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:28:02.519322    3381 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:28:04.524681    3381 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:28:04.524778    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:04.660686    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:04.660770    3381 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:04.810214    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:04.923331    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:04.923468    3381 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:05.224114    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:05.352810    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:05.352909    3381 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:05.926437    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:06.047767    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	W1117 14:28:06.047853    3381 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	W1117 14:28:06.047866    3381 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:06.047874    3381 start.go:129] duration metric: createHost completed in 8.464307442s
	I1117 14:28:06.047946    3381 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:28:06.048007    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:06.171237    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:06.171322    3381 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:06.350676    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:06.472708    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:06.472794    3381 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:06.809021    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:06.930542    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:06.930654    3381 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:07.392322    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:07.511762    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	W1117 14:28:07.511840    3381 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	W1117 14:28:07.511854    3381 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:07.511861    3381 fix.go:57] fixHost completed within 31.1218436s
	I1117 14:28:07.511874    3381 start.go:80] releasing machines lock for "functional-20211117142648-2140", held for 31.121875782s
	W1117 14:28:07.511890    3381 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:28:07.512012    3381 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:28:07.512018    3381 start.go:547] Will try again in 5 seconds ...
	I1117 14:28:08.585600    3381 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.066108479s)
	I1117 14:28:08.585625    3381 kic.go:188] duration metric: took 6.066265 seconds to extract preloaded images to volume
	I1117 14:28:12.520044    3381 start.go:313] acquiring machines lock for functional-20211117142648-2140: {Name:mk0ffa36ccb8092a6f2338223436899c154ee29e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:28:12.520215    3381 start.go:317] acquired machines lock for "functional-20211117142648-2140" in 137.255µs
	I1117 14:28:12.520257    3381 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:28:12.520270    3381 fix.go:55] fixHost starting: 
	I1117 14:28:12.520715    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:12.631417    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:12.631462    3381 fix.go:108] recreateIfNeeded on functional-20211117142648-2140: state= err=unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:12.631474    3381 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:28:12.658510    3381 out.go:176] * docker "functional-20211117142648-2140" container is missing, will recreate.
	I1117 14:28:12.658556    3381 delete.go:124] DEMOLISHING functional-20211117142648-2140 ...
	I1117 14:28:12.658792    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:12.770735    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:28:12.770772    3381 stop.go:75] unable to get state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:12.771237    3381 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:12.772084    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:12.880256    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:12.880304    3381 delete.go:82] Unable to get host status for functional-20211117142648-2140, assuming it has already been deleted: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:12.880407    3381 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
	W1117 14:28:12.987720    3381 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:12.987755    3381 kic.go:360] could not find the container functional-20211117142648-2140 to remove it. will try anyways
	I1117 14:28:12.987856    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:13.095026    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:28:13.095069    3381 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:13.095174    3381 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0"
	W1117 14:28:13.202198    3381 cli_runner.go:162] docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:28:13.202225    3381 oci.go:658] error shutdown functional-20211117142648-2140: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:14.210060    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:14.322315    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:14.322356    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:14.322365    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:14.322387    3381 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:14.714203    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:14.830859    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:14.830902    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:14.830923    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:14.830945    3381 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:15.426894    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:15.537814    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:15.537857    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:15.537866    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:15.537886    3381 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:16.870205    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:16.999980    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:17.000019    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:17.000030    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:17.000049    3381 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:18.223043    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:18.334966    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:18.335002    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:18.335012    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:18.335033    3381 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:20.124110    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:20.237891    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:20.237935    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:20.237951    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:20.237974    3381 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:23.513533    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:23.621379    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:23.621420    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:23.621430    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:23.621450    3381 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:29.729844    3381 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:29.842429    3381 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:29.842468    3381 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:29.842478    3381 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:29.842502    3381 oci.go:87] couldn't shut down functional-20211117142648-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	 
	I1117 14:28:29.842586    3381 cli_runner.go:115] Run: docker rm -f -v functional-20211117142648-2140
	I1117 14:28:29.948911    3381 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
	W1117 14:28:30.056301    3381 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:30.056414    3381 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:28:30.167995    3381 cli_runner.go:115] Run: docker network rm functional-20211117142648-2140
	I1117 14:28:32.986894    3381 cli_runner.go:168] Completed: docker network rm functional-20211117142648-2140: (2.818778546s)
	W1117 14:28:32.987179    3381 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:28:32.987186    3381 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:28:33.995275    3381 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:28:34.022644    3381 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 14:28:34.022779    3381 start.go:160] libmachine.API.Create for "functional-20211117142648-2140" (driver="docker")
	I1117 14:28:34.022810    3381 client.go:168] LocalClient.Create starting
	I1117 14:28:34.023003    3381 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:28:34.023089    3381 main.go:130] libmachine: Decoding PEM data...
	I1117 14:28:34.023113    3381 main.go:130] libmachine: Parsing certificate...
	I1117 14:28:34.023218    3381 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:28:34.023272    3381 main.go:130] libmachine: Decoding PEM data...
	I1117 14:28:34.023292    3381 main.go:130] libmachine: Parsing certificate...
	I1117 14:28:34.024302    3381 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:28:34.133005    3381 cli_runner.go:162] docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:28:34.133117    3381 network_create.go:254] running [docker network inspect functional-20211117142648-2140] to gather additional debugging logs...
	I1117 14:28:34.133133    3381 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140
	W1117 14:28:34.240154    3381 cli_runner.go:162] docker network inspect functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:34.240182    3381 network_create.go:257] error running [docker network inspect functional-20211117142648-2140]: docker network inspect functional-20211117142648-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20211117142648-2140
	I1117 14:28:34.240199    3381 network_create.go:259] output of [docker network inspect functional-20211117142648-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20211117142648-2140
	
	** /stderr **
	I1117 14:28:34.240332    3381 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:28:34.350669    3381 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00061ca50] amended:false}} dirty:map[] misses:0}
	I1117 14:28:34.350711    3381 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:28:34.350934    3381 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00061ca50] amended:true}} dirty:map[192.168.49.0:0xc00061ca50 192.168.58.0:0xc0004b60d8] misses:0}
	I1117 14:28:34.350948    3381 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:28:34.350955    3381 network_create.go:106] attempt to create docker network functional-20211117142648-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:28:34.351053    3381 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140
	I1117 14:28:38.272285    3381 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140: (3.921101288s)
	I1117 14:28:38.272309    3381 network_create.go:90] docker network functional-20211117142648-2140 192.168.58.0/24 created
	I1117 14:28:38.272328    3381 kic.go:106] calculated static IP "192.168.58.2" for the "functional-20211117142648-2140" container
	I1117 14:28:38.272434    3381 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:28:38.378431    3381 cli_runner.go:115] Run: docker volume create functional-20211117142648-2140 --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:28:38.486859    3381 oci.go:102] Successfully created a docker volume functional-20211117142648-2140
	I1117 14:28:38.486992    3381 cli_runner.go:115] Run: docker run --rm --name functional-20211117142648-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --entrypoint /usr/bin/test -v functional-20211117142648-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:28:38.901896    3381 oci.go:106] Successfully prepared a docker volume functional-20211117142648-2140
	E1117 14:28:38.901943    3381 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:28:38.901946    3381 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:28:38.901955    3381 client.go:171] LocalClient.Create took 4.879025812s
	I1117 14:28:38.901977    3381 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:28:38.902078    3381 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:28:40.902817    3381 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:28:40.902907    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:41.033155    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:41.033246    3381 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:41.232301    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:41.346968    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:41.347053    3381 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:41.646351    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:41.761763    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:41.761848    3381 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:42.468146    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:42.594331    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	W1117 14:28:42.594416    3381 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	W1117 14:28:42.594444    3381 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:42.594455    3381 start.go:129] duration metric: createHost completed in 8.598905755s
	I1117 14:28:42.594518    3381 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:28:42.594586    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:42.724568    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:42.724719    3381 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:43.074485    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:43.195242    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:43.195321    3381 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:43.644692    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:43.768426    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:43.768514    3381 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:44.346914    3381 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:28:44.454538    3381 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	W1117 14:28:44.454620    3381 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	W1117 14:28:44.454632    3381 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:44.454648    3381 fix.go:57] fixHost completed within 31.933644402s
	I1117 14:28:44.454656    3381 start.go:80] releasing machines lock for "functional-20211117142648-2140", held for 31.933691008s
	W1117 14:28:44.454842    3381 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117142648-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p functional-20211117142648-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:28:44.511927    3381 out.go:176] 
	W1117 14:28:44.512130    3381 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:28:44.512145    3381 out.go:241] * 
	* 
	W1117 14:28:44.513201    3381 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:28:44.606950    3381 out.go:176] 

** /stderr **
functional_test.go:602: failed to soft start minikube. args "out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --alsologtostderr -v=8": exit status 80
functional_test.go:604: soft start took 1m9.347235496s for "functional-20211117142648-2140" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "b159ef8ccb57d8af565b0c37d4dacfc0ccdf25bbe3bc274f30c2fda8f6557c59",
	        "Created": "2021-11-17T22:28:34.475622498Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (147.895227ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:28:44.925436    3694 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/SoftStart (69.62s)

TestFunctional/serial/KubeContext (0.3s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:622: (dbg) Run:  kubectl config current-context
functional_test.go:622: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (37.003562ms)

** stderr ** 
	W1117 14:28:44.962775    3699 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	error: current-context is not set

** /stderr **
functional_test.go:624: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:628: expected current-context = "functional-20211117142648-2140", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubeContext]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "b159ef8ccb57d8af565b0c37d4dacfc0ccdf25bbe3bc274f30c2fda8f6557c59",
	        "Created": "2021-11-17T22:28:34.475622498Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (148.920267ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:28:45.226164    3704 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubeContext (0.30s)

TestFunctional/serial/KubectlGetPods (0.3s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:637: (dbg) Run:  kubectl --context functional-20211117142648-2140 get po -A
functional_test.go:637: (dbg) Non-zero exit: kubectl --context functional-20211117142648-2140 get po -A: exit status 1 (37.499856ms)

** stderr ** 
	W1117 14:28:45.263980    3709 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117142648-2140

** /stderr **
functional_test.go:639: failed to get kubectl pods: args "kubectl --context functional-20211117142648-2140 get po -A" : exit status 1
functional_test.go:643: expected stderr to be empty but got *"W1117 14:28:45.263980    3709 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig\nError in configuration: context was not found for specified context: functional-20211117142648-2140\n"*: args "kubectl --context functional-20211117142648-2140 get po -A"
functional_test.go:646: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-20211117142648-2140 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "b159ef8ccb57d8af565b0c37d4dacfc0ccdf25bbe3bc274f30c2fda8f6557c59",
	        "Created": "2021-11-17T22:28:34.475622498Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (150.462198ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:28:45.527743    3714 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.30s)

TestFunctional/serial/CacheCmd/cache/add_remote (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache add k8s.gcr.io/pause:3.1
functional_test.go:983: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache add k8s.gcr.io/pause:3.1: exit status 10 (107.947585ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/pause_3.1": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:3.1
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_1ee7f0edc085faba6c5c2cd5567d37f230636116_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:3.1". args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache add k8s.gcr.io/pause:3.1" err exit status 10
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache add k8s.gcr.io/pause:3.3
functional_test.go:983: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache add k8s.gcr.io/pause:3.3: exit status 10 (98.733546ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/pause_3.3": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:3.3
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_de8128d312e6d2ac89c1c5074cd22b7974c28c2b_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:3.3". args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache add k8s.gcr.io/pause:3.3" err exit status 10
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache add k8s.gcr.io/pause:latest
functional_test.go:983: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache add k8s.gcr.io/pause:latest: exit status 10 (98.794598ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: save to dir: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/pause_latest": write: unable to calculate manifest: Error: No such image: k8s.gcr.io/pause:latest
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_5aa7605f63066fc2b7f8379478b9def700202ac8_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:985: failed to 'cache add' remote image "k8s.gcr.io/pause:latest". args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache add k8s.gcr.io/pause:latest" err exit status 10
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_remote (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1039: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
functional_test.go:1039: (dbg) Non-zero exit: out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3: exit status 30 (91.236148ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/pause_3.3: no such file or directory
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_e17e40910561608ab15e9700ab84b4e1db856f38_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1041: failed to delete image k8s.gcr.io/pause:3.3 from cache. args "out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3": exit status 30
--- FAIL: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1047: (dbg) Run:  out/minikube-darwin-amd64 cache list
functional_test.go:1052: expected 'cache list' output to include 'k8s.gcr.io/pause' but got: ******
--- FAIL: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1061: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh sudo crictl images
functional_test.go:1061: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh sudo crictl images: exit status 80 (205.522887ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_6599ef642588877027e69d7c08a478c21d2be2a6_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1063: failed to get images by "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh sudo crictl images" ssh exit status 80
functional_test.go:1067: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_6599ef642588877027e69d7c08a478c21d2be2a6_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh sudo docker rmi k8s.gcr.io/pause:latest: exit status 80 (202.016356ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_f6cc923efa9cb983c5688c815b9a26138561eb5d_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1087: failed to manually delete image "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh sudo docker rmi k8s.gcr.io/pause:latest" : exit status 80
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (208.376994ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_faf3f1cd86a795397a09a2748fe4ee3bd5d83e42_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache reload
functional_test.go:1100: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1100: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 80 (199.983547ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_faf3f1cd86a795397a09a2748fe4ee3bd5d83e42_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1102: expected "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh sudo crictl inspecti k8s.gcr.io/pause:latest" to run successfully but got error: exit status 80
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.68s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1109: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1109: (dbg) Non-zero exit: out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1: exit status 30 (92.111743ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/pause_3.1: no such file or directory
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_d1b33253e7334db9f364f7cea75d63fe683cad74_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1111: failed to delete k8s.gcr.io/pause:3.1 from cache. args "out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1": exit status 30
functional_test.go:1109: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
functional_test.go:1109: (dbg) Non-zero exit: out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest: exit status 30 (92.193982ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: remove /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/pause_latest: no such file or directory
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cache_d17bcf228b7a032ee268baa189bce7c5c7938c34_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1111: failed to delete k8s.gcr.io/pause:latest from cache. args "out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest": exit status 30
--- FAIL: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.71s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:657: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 kubectl -- --context functional-20211117142648-2140 get pods
functional_test.go:657: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 kubectl -- --context functional-20211117142648-2140 get pods: exit status 1 (448.461786ms)

                                                
                                                
** stderr ** 
	W1117 14:28:49.393701    3775 loader.go:221] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117142648-2140
	* no server found for cluster "functional-20211117142648-2140"

                                                
                                                
** /stderr **
functional_test.go:660: failed to get pods. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 kubectl -- --context functional-20211117142648-2140 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "b159ef8ccb57d8af565b0c37d4dacfc0ccdf25bbe3bc274f30c2fda8f6557c59",
	        "Created": "2021-11-17T22:28:34.475622498Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (148.202938ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 14:28:49.653856    3780 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.71s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.76s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:682: (dbg) Run:  out/kubectl --context functional-20211117142648-2140 get pods
functional_test.go:682: (dbg) Non-zero exit: out/kubectl --context functional-20211117142648-2140 get pods: exit status 1 (504.576191ms)

                                                
                                                
** stderr ** 
	W1117 14:28:50.158094    3786 loader.go:221] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: 
	* context was not found for specified context: functional-20211117142648-2140
	* no server found for cluster "functional-20211117142648-2140"

                                                
                                                
** /stderr **
functional_test.go:685: failed to run kubectl directly. args "out/kubectl --context functional-20211117142648-2140 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "b159ef8ccb57d8af565b0c37d4dacfc0ccdf25bbe3bc274f30c2fda8f6557c59",
	        "Created": "2021-11-17T22:28:34.475622498Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (146.154277ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 14:28:50.414823    3792 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.76s)

                                                
                                    
TestFunctional/serial/ExtraConfig (69.67s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:698: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:698: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (1m9.408150061s)

                                                
                                                
-- stdout --
	* [functional-20211117142648-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20211117142648-2140 in cluster functional-20211117142648-2140
	* Pulling base image ...
	* docker "functional-20211117142648-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* docker "functional-20211117142648-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 14:29:17.566432    3797 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:29:53.943284    3797 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p functional-20211117142648-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:700: failed to restart minikube. args "out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:702: restart took 1m9.408318465s for "functional-20211117142648-2140" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "0a4db5d986095fce21025ba51dff65dd2f051020fce3b8f38faf4a77983ca7a2",
	        "Created": "2021-11-17T22:29:49.540695246Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (152.96762ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:00.089066    4119 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ExtraConfig (69.67s)

TestFunctional/serial/ComponentHealth (0.3s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:752: (dbg) Run:  kubectl --context functional-20211117142648-2140 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:752: (dbg) Non-zero exit: kubectl --context functional-20211117142648-2140 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (37.174907ms)

** stderr ** 
	W1117 14:30:00.126517    4125 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	error: context "functional-20211117142648-2140" does not exist

** /stderr **
functional_test.go:754: failed to get components. args "kubectl --context functional-20211117142648-2140 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "0a4db5d986095fce21025ba51dff65dd2f051020fce3b8f38faf4a77983ca7a2",
	        "Created": "2021-11-17T22:29:49.540695246Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (150.710134ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:00.389329    4130 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/serial/ComponentHealth (0.30s)

TestFunctional/serial/LogsCmd (0.42s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1173: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 logs
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 logs: exit status 80 (404.352054ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                           Args                           |               Profile               |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                                    | download-only-20211117142321-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:08 PST | Wed, 17 Nov 2021 14:24:09 PST |
	| delete  | -p                                                       | download-only-20211117142321-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:09 PST | Wed, 17 Nov 2021 14:24:10 PST |
	|         | download-only-20211117142321-2140                        |                                     |         |         |                               |                               |
	| delete  | -p                                                       | download-only-20211117142321-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:10 PST | Wed, 17 Nov 2021 14:24:10 PST |
	|         | download-only-20211117142321-2140                        |                                     |         |         |                               |                               |
	| delete  | -p                                                       | download-docker-20211117142410-2140 | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:19 PST | Wed, 17 Nov 2021 14:24:20 PST |
	|         | download-docker-20211117142410-2140                      |                                     |         |         |                               |                               |
	| delete  | -p addons-20211117142420-2140                            | addons-20211117142420-2140          | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:25:06 PST | Wed, 17 Nov 2021 14:25:10 PST |
	| delete  | -p nospam-20211117142510-2140                            | nospam-20211117142510-2140          | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:26:44 PST | Wed, 17 Nov 2021 14:26:48 PST |
	| -p      | functional-20211117142648-2140 cache add                 | functional-20211117142648-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:46 PST | Wed, 17 Nov 2021 14:28:47 PST |
	|         | minikube-local-cache-test:functional-20211117142648-2140 |                                     |         |         |                               |                               |
	| -p      | functional-20211117142648-2140 cache delete              | functional-20211117142648-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:47 PST | Wed, 17 Nov 2021 14:28:47 PST |
	|         | minikube-local-cache-test:functional-20211117142648-2140 |                                     |         |         |                               |                               |
	| cache   | list                                                     | minikube                            | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:47 PST | Wed, 17 Nov 2021 14:28:47 PST |
	| -p      | functional-20211117142648-2140                           | functional-20211117142648-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:48 PST | Wed, 17 Nov 2021 14:28:48 PST |
	|         | cache reload                                             |                                     |         |         |                               |                               |
	|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 14:28:50
	Running on machine: 37310
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 14:28:50.453976    3797 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:28:50.454101    3797 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:28:50.454104    3797 out.go:310] Setting ErrFile to fd 2...
	I1117 14:28:50.454106    3797 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:28:50.454178    3797 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:28:50.454455    3797 out.go:304] Setting JSON to false
	I1117 14:28:50.479425    3797 start.go:112] hostinfo: {"hostname":"37310.local","uptime":1705,"bootTime":1637186425,"procs":345,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:28:50.479515    3797 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:28:50.506691    3797 out.go:176] * [functional-20211117142648-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:28:50.506942    3797 notify.go:174] Checking for updates...
	I1117 14:28:50.554344    3797 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:28:50.580007    3797 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:28:50.606367    3797 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:28:50.632166    3797 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:28:50.632507    3797 config.go:176] Loaded profile config "functional-20211117142648-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:28:50.632539    3797 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:28:50.727825    3797 docker.go:132] docker version: linux-20.10.6
	I1117 14:28:50.727938    3797 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:28:50.904208    3797 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 22:28:50.843477149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:28:50.952845    3797 out.go:176] * Using the docker driver based on existing profile
	I1117 14:28:50.952891    3797 start.go:280] selected driver: docker
	I1117 14:28:50.952900    3797 start.go:775] validating driver "docker" against &{Name:functional-20211117142648-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117142648-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:28:50.953010    3797 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:28:50.953389    3797 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:28:51.130521    3797 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 22:28:51.070382389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:28:51.132531    3797 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 14:28:51.132556    3797 cni.go:93] Creating CNI manager for ""
	I1117 14:28:51.132561    3797 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:28:51.132572    3797 start_flags.go:282] config:
	{Name:functional-20211117142648-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117142648-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:28:51.181173    3797 out.go:176] * Starting control plane node functional-20211117142648-2140 in cluster functional-20211117142648-2140
	I1117 14:28:51.181244    3797 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:28:51.207291    3797 out.go:176] * Pulling base image ...
	I1117 14:28:51.207344    3797 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:28:51.207423    3797 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 14:28:51.207444    3797 cache.go:57] Caching tarball of preloaded images
	I1117 14:28:51.207450    3797 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:28:51.207661    3797 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 14:28:51.207678    3797 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 14:28:51.208383    3797 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/functional-20211117142648-2140/config.json ...
	I1117 14:28:51.325648    3797 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:28:51.325656    3797 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:28:51.325664    3797 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:28:51.325712    3797 start.go:313] acquiring machines lock for functional-20211117142648-2140: {Name:mk0ffa36ccb8092a6f2338223436899c154ee29e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:28:51.325787    3797 start.go:317] acquired machines lock for "functional-20211117142648-2140" in 59.856µs
	I1117 14:28:51.325808    3797 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:28:51.325812    3797 fix.go:55] fixHost starting: 
	I1117 14:28:51.326074    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:51.433132    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:51.433190    3797 fix.go:108] recreateIfNeeded on functional-20211117142648-2140: state= err=unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:51.433214    3797 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:28:51.460197    3797 out.go:176] * docker "functional-20211117142648-2140" container is missing, will recreate.
	I1117 14:28:51.460232    3797 delete.go:124] DEMOLISHING functional-20211117142648-2140 ...
	I1117 14:28:51.460438    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:51.568262    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:28:51.568308    3797 stop.go:75] unable to get state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:51.568320    3797 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:51.568707    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:51.681084    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:51.681119    3797 delete.go:82] Unable to get host status for functional-20211117142648-2140, assuming it has already been deleted: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:51.681215    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
	W1117 14:28:51.790216    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
	I1117 14:28:51.790245    3797 kic.go:360] could not find the container functional-20211117142648-2140 to remove it. will try anyways
	I1117 14:28:51.790342    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:51.896671    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:28:51.896705    3797 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:51.896802    3797 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0"
	W1117 14:28:52.022260    3797 cli_runner.go:162] docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:28:52.022289    3797 oci.go:658] error shutdown functional-20211117142648-2140: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:53.022960    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:53.131118    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:53.131161    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:53.131173    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:53.131206    3797 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:53.684125    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:53.793298    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:53.793336    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:53.793344    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:53.793361    3797 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:54.882327    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:54.993642    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:54.993682    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:54.993692    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:54.993712    3797 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:56.311177    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:56.421198    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:56.421230    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:56.421235    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:56.421254    3797 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:58.005567    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:28:58.113113    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:28:58.113150    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:28:58.113161    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:28:58.113197    3797 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:00.459602    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:00.571126    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:00.571164    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:00.571172    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:29:00.571190    3797 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:05.080127    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:05.195193    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:05.195225    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:05.195242    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:29:05.195261    3797 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:08.422668    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:08.537181    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:08.537214    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:08.537220    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:29:08.537240    3797 oci.go:87] couldn't shut down functional-20211117142648-2140 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	 
	I1117 14:29:08.537331    3797 cli_runner.go:115] Run: docker rm -f -v functional-20211117142648-2140
	I1117 14:29:08.645821    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
	W1117 14:29:08.752869    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:08.752987    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:29:08.858990    3797 cli_runner.go:115] Run: docker network rm functional-20211117142648-2140
	I1117 14:29:11.593581    3797 cli_runner.go:168] Completed: docker network rm functional-20211117142648-2140: (2.734480004s)
	W1117 14:29:11.593836    3797 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:29:11.593840    3797 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:29:12.602577    3797 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:29:12.651696    3797 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 14:29:12.651906    3797 start.go:160] libmachine.API.Create for "functional-20211117142648-2140" (driver="docker")
	I1117 14:29:12.651941    3797 client.go:168] LocalClient.Create starting
	I1117 14:29:12.652118    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:29:12.652191    3797 main.go:130] libmachine: Decoding PEM data...
	I1117 14:29:12.652216    3797 main.go:130] libmachine: Parsing certificate...
	I1117 14:29:12.652330    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:29:12.652377    3797 main.go:130] libmachine: Decoding PEM data...
	I1117 14:29:12.652388    3797 main.go:130] libmachine: Parsing certificate...
	I1117 14:29:12.653472    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:29:12.763983    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:29:12.764074    3797 network_create.go:254] running [docker network inspect functional-20211117142648-2140] to gather additional debugging logs...
	I1117 14:29:12.764090    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140
	W1117 14:29:12.872980    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:12.872997    3797 network_create.go:257] error running [docker network inspect functional-20211117142648-2140]: docker network inspect functional-20211117142648-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20211117142648-2140
	I1117 14:29:12.873008    3797 network_create.go:259] output of [docker network inspect functional-20211117142648-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20211117142648-2140
	
	** /stderr **
	I1117 14:29:12.873089    3797 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:29:12.982933    3797 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001321b8] misses:0}
	I1117 14:29:12.982963    3797 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:29:12.982977    3797 network_create.go:106] attempt to create docker network functional-20211117142648-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:29:12.983057    3797 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140
	I1117 14:29:16.892559    3797 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140: (3.909365408s)
	I1117 14:29:16.892578    3797 network_create.go:90] docker network functional-20211117142648-2140 192.168.49.0/24 created
	I1117 14:29:16.892596    3797 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117142648-2140" container
	I1117 14:29:16.892699    3797 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:29:17.020685    3797 cli_runner.go:115] Run: docker volume create functional-20211117142648-2140 --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:29:17.129972    3797 oci.go:102] Successfully created a docker volume functional-20211117142648-2140
	I1117 14:29:17.130089    3797 cli_runner.go:115] Run: docker run --rm --name functional-20211117142648-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --entrypoint /usr/bin/test -v functional-20211117142648-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:29:17.566365    3797 oci.go:106] Successfully prepared a docker volume functional-20211117142648-2140
	E1117 14:29:17.566432    3797 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:29:17.566436    3797 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:29:17.566457    3797 client.go:171] LocalClient.Create took 4.914397556s
	I1117 14:29:17.566461    3797 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:29:17.566568    3797 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:29:19.568044    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:29:19.568135    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:19.704301    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:19.704396    3797 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:19.853899    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:19.973058    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:19.973140    3797 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:20.278372    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:20.398567    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:20.398644    3797 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:20.969945    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:21.092330    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	W1117 14:29:21.092412    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	W1117 14:29:21.092425    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:21.092433    3797 start.go:129] duration metric: createHost completed in 8.489649896s
	I1117 14:29:21.092493    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:29:21.092557    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:21.212184    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:21.212259    3797 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:21.391281    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:21.515478    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:21.515547    3797 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:21.846045    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:21.975168    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:21.975244    3797 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:22.441449    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:22.558262    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	W1117 14:29:22.558335    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	W1117 14:29:22.558346    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:22.558362    3797 fix.go:57] fixHost completed within 31.231821737s
	I1117 14:29:22.558368    3797 start.go:80] releasing machines lock for "functional-20211117142648-2140", held for 31.231855741s
	W1117 14:29:22.558382    3797 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:29:22.558543    3797 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:29:22.558553    3797 start.go:547] Will try again in 5 seconds ...
	I1117 14:29:23.490472    3797 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.923720943s)
	I1117 14:29:23.490488    3797 kic.go:188] duration metric: took 5.923890 seconds to extract preloaded images to volume
	I1117 14:29:27.568571    3797 start.go:313] acquiring machines lock for functional-20211117142648-2140: {Name:mk0ffa36ccb8092a6f2338223436899c154ee29e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:29:27.568718    3797 start.go:317] acquired machines lock for "functional-20211117142648-2140" in 127.45µs
	I1117 14:29:27.568753    3797 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:29:27.568758    3797 fix.go:55] fixHost starting: 
	I1117 14:29:27.569179    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:27.683038    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:27.683071    3797 fix.go:108] recreateIfNeeded on functional-20211117142648-2140: state= err=unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:27.683078    3797 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:29:27.731919    3797 out.go:176] * docker "functional-20211117142648-2140" container is missing, will recreate.
	I1117 14:29:27.731944    3797 delete.go:124] DEMOLISHING functional-20211117142648-2140 ...
	I1117 14:29:27.732141    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:27.840313    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:29:27.840353    3797 stop.go:75] unable to get state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:27.840363    3797 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:27.841673    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:27.950637    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:27.950673    3797 delete.go:82] Unable to get host status for functional-20211117142648-2140, assuming it has already been deleted: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:27.950773    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
	W1117 14:29:28.060256    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:28.060281    3797 kic.go:360] could not find the container functional-20211117142648-2140 to remove it. will try anyways
	I1117 14:29:28.060396    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:28.167170    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:29:28.167205    3797 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:28.167311    3797 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0"
	W1117 14:29:28.276287    3797 cli_runner.go:162] docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:29:28.276303    3797 oci.go:658] error shutdown functional-20211117142648-2140: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:29.278223    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:29.389337    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:29.389369    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:29.389376    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:29:29.389393    3797 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:29.786102    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:29.898246    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:29.898277    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:29.898285    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:29:29.898304    3797 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:30.496761    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:30.607973    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:30.608013    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:30.608030    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:29:30.608052    3797 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:31.936732    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:32.051601    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:32.051637    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:32.051651    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:29:32.051670    3797 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:33.264454    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:33.374141    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:33.374177    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:33.374189    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:29:33.374206    3797 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:35.158716    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:35.265863    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:35.265895    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:35.265910    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:29:35.265931    3797 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:38.535473    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:38.647715    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:38.647755    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:38.647764    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:29:38.647780    3797 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:44.750001    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:29:44.865258    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:29:44.865291    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:44.865296    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
	I1117 14:29:44.865318    3797 oci.go:87] couldn't shut down functional-20211117142648-2140 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	 
	I1117 14:29:44.865428    3797 cli_runner.go:115] Run: docker rm -f -v functional-20211117142648-2140
	I1117 14:29:44.974014    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
	W1117 14:29:45.083280    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:45.083390    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:29:45.195503    3797 cli_runner.go:115] Run: docker network rm functional-20211117142648-2140
	I1117 14:29:48.047685    3797 cli_runner.go:168] Completed: docker network rm functional-20211117142648-2140: (2.852071943s)
	W1117 14:29:48.047961    3797 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:29:48.047965    3797 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:29:49.048256    3797 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:29:49.075621    3797 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1117 14:29:49.075837    3797 start.go:160] libmachine.API.Create for "functional-20211117142648-2140" (driver="docker")
	I1117 14:29:49.075894    3797 client.go:168] LocalClient.Create starting
	I1117 14:29:49.076071    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:29:49.076148    3797 main.go:130] libmachine: Decoding PEM data...
	I1117 14:29:49.076172    3797 main.go:130] libmachine: Parsing certificate...
	I1117 14:29:49.076267    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:29:49.076321    3797 main.go:130] libmachine: Decoding PEM data...
	I1117 14:29:49.076339    3797 main.go:130] libmachine: Parsing certificate...
	I1117 14:29:49.098184    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:29:49.208567    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:29:49.208705    3797 network_create.go:254] running [docker network inspect functional-20211117142648-2140] to gather additional debugging logs...
	I1117 14:29:49.208724    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140
	W1117 14:29:49.317875    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:49.317896    3797 network_create.go:257] error running [docker network inspect functional-20211117142648-2140]: docker network inspect functional-20211117142648-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: functional-20211117142648-2140
	I1117 14:29:49.317907    3797 network_create.go:259] output of [docker network inspect functional-20211117142648-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: functional-20211117142648-2140
	
	** /stderr **
	I1117 14:29:49.318014    3797 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:29:49.426575    3797 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001321b8] amended:false}} dirty:map[] misses:0}
	I1117 14:29:49.426600    3797 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:29:49.426769    3797 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001321b8] amended:true}} dirty:map[192.168.49.0:0xc0001321b8 192.168.58.0:0xc000186290] misses:0}
	I1117 14:29:49.426781    3797 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:29:49.426786    3797 network_create.go:106] attempt to create docker network functional-20211117142648-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:29:49.426872    3797 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140
	I1117 14:29:53.299742    3797 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140: (3.872719387s)
	I1117 14:29:53.299761    3797 network_create.go:90] docker network functional-20211117142648-2140 192.168.58.0/24 created
	I1117 14:29:53.299783    3797 kic.go:106] calculated static IP "192.168.58.2" for the "functional-20211117142648-2140" container
	I1117 14:29:53.299896    3797 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:29:53.409808    3797 cli_runner.go:115] Run: docker volume create functional-20211117142648-2140 --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:29:53.515033    3797 oci.go:102] Successfully created a docker volume functional-20211117142648-2140
	I1117 14:29:53.515165    3797 cli_runner.go:115] Run: docker run --rm --name functional-20211117142648-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --entrypoint /usr/bin/test -v functional-20211117142648-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:29:53.943240    3797 oci.go:106] Successfully prepared a docker volume functional-20211117142648-2140
	E1117 14:29:53.943284    3797 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:29:53.943289    3797 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:29:53.943298    3797 client.go:171] LocalClient.Create took 4.867288969s
	I1117 14:29:53.943310    3797 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:29:53.943404    3797 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:29:55.945897    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:29:55.945981    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:56.083098    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:56.083184    3797 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:56.281845    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:56.398584    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:56.398759    3797 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:56.704670    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:56.826451    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:56.826537    3797 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:57.531465    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:57.650557    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	W1117 14:29:57.650654    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	W1117 14:29:57.650688    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:57.650703    3797 start.go:129] duration metric: createHost completed in 8.602237402s
	I1117 14:29:57.650776    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:29:57.650843    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:57.796584    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:57.796668    3797 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:58.138508    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:58.282404    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:58.282484    3797 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:58.731488    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:58.858988    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	I1117 14:29:58.859067    3797 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:59.437319    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
	W1117 14:29:59.547536    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
	W1117 14:29:59.547624    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	W1117 14:29:59.547635    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	I1117 14:29:59.547645    3797 fix.go:57] fixHost completed within 31.978150812s
	I1117 14:29:59.547654    3797 start.go:80] releasing machines lock for "functional-20211117142648-2140", held for 31.978191455s
	W1117 14:29:59.547789    3797 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117142648-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:29:59.601127    3797 out.go:176] 
	W1117 14:29:59.601390    3797 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:29:59.601403    3797 out.go:241] * 
	W1117 14:29:59.602581    3797 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1175: out/minikube-darwin-amd64 -p functional-20211117142648-2140 logs failed: exit status 80
functional_test.go:1165: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command |                           Args                           |               Profile               |  User   | Version |          Start Time           |           End Time            |
|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
| delete  | --all                                                    | download-only-20211117142321-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:08 PST | Wed, 17 Nov 2021 14:24:09 PST |
| delete  | -p                                                       | download-only-20211117142321-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:09 PST | Wed, 17 Nov 2021 14:24:10 PST |
|         | download-only-20211117142321-2140                        |                                     |         |         |                               |                               |
| delete  | -p                                                       | download-only-20211117142321-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:10 PST | Wed, 17 Nov 2021 14:24:10 PST |
|         | download-only-20211117142321-2140                        |                                     |         |         |                               |                               |
| delete  | -p                                                       | download-docker-20211117142410-2140 | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:19 PST | Wed, 17 Nov 2021 14:24:20 PST |
|         | download-docker-20211117142410-2140                      |                                     |         |         |                               |                               |
| delete  | -p addons-20211117142420-2140                            | addons-20211117142420-2140          | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:25:06 PST | Wed, 17 Nov 2021 14:25:10 PST |
| delete  | -p nospam-20211117142510-2140                            | nospam-20211117142510-2140          | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:26:44 PST | Wed, 17 Nov 2021 14:26:48 PST |
| -p      | functional-20211117142648-2140 cache add                 | functional-20211117142648-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:46 PST | Wed, 17 Nov 2021 14:28:47 PST |
|         | minikube-local-cache-test:functional-20211117142648-2140 |                                     |         |         |                               |                               |
| -p      | functional-20211117142648-2140 cache delete              | functional-20211117142648-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:47 PST | Wed, 17 Nov 2021 14:28:47 PST |
|         | minikube-local-cache-test:functional-20211117142648-2140 |                                     |         |         |                               |                               |
| cache   | list                                                     | minikube                            | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:47 PST | Wed, 17 Nov 2021 14:28:47 PST |
| -p      | functional-20211117142648-2140                           | functional-20211117142648-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:48 PST | Wed, 17 Nov 2021 14:28:48 PST |
|         | cache reload                                             |                                     |         |         |                               |                               |
|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|

* 
* ==> Last Start <==
* Log file created at: 2021/11/17 14:28:50
Running on machine: 37310
Binary: Built with gc go1.17.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1117 14:28:50.453976    3797 out.go:297] Setting OutFile to fd 1 ...
I1117 14:28:50.454101    3797 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 14:28:50.454104    3797 out.go:310] Setting ErrFile to fd 2...
I1117 14:28:50.454106    3797 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 14:28:50.454178    3797 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
I1117 14:28:50.454455    3797 out.go:304] Setting JSON to false
I1117 14:28:50.479425    3797 start.go:112] hostinfo: {"hostname":"37310.local","uptime":1705,"bootTime":1637186425,"procs":345,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
W1117 14:28:50.479515    3797 start.go:120] gopshost.Virtualization returned error: not implemented yet
I1117 14:28:50.506691    3797 out.go:176] * [functional-20211117142648-2140] minikube v1.24.0 on Darwin 11.2.3
I1117 14:28:50.506942    3797 notify.go:174] Checking for updates...
I1117 14:28:50.554344    3797 out.go:176]   - MINIKUBE_LOCATION=12739
I1117 14:28:50.580007    3797 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
I1117 14:28:50.606367    3797 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
I1117 14:28:50.632166    3797 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
I1117 14:28:50.632507    3797 config.go:176] Loaded profile config "functional-20211117142648-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 14:28:50.632539    3797 driver.go:343] Setting default libvirt URI to qemu:///system
I1117 14:28:50.727825    3797 docker.go:132] docker version: linux-20.10.6
I1117 14:28:50.727938    3797 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 14:28:50.904208    3797 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 22:28:50.843477149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
I1117 14:28:50.952845    3797 out.go:176] * Using the docker driver based on existing profile
I1117 14:28:50.952891    3797 start.go:280] selected driver: docker
I1117 14:28:50.952900    3797 start.go:775] validating driver "docker" against &{Name:functional-20211117142648-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117142648-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1117 14:28:50.953010    3797 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1117 14:28:50.953389    3797 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 14:28:51.130521    3797 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 22:28:51.070382389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
I1117 14:28:51.132531    3797 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1117 14:28:51.132556    3797 cni.go:93] Creating CNI manager for ""
I1117 14:28:51.132561    3797 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1117 14:28:51.132572    3797 start_flags.go:282] config:
{Name:functional-20211117142648-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117142648-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1117 14:28:51.181173    3797 out.go:176] * Starting control plane node functional-20211117142648-2140 in cluster functional-20211117142648-2140
I1117 14:28:51.181244    3797 cache.go:118] Beginning downloading kic base image for docker with docker
I1117 14:28:51.207291    3797 out.go:176] * Pulling base image ...
I1117 14:28:51.207344    3797 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 14:28:51.207423    3797 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
I1117 14:28:51.207444    3797 cache.go:57] Caching tarball of preloaded images
I1117 14:28:51.207450    3797 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
I1117 14:28:51.207661    3797 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1117 14:28:51.207678    3797 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
I1117 14:28:51.208383    3797 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/functional-20211117142648-2140/config.json ...
I1117 14:28:51.325648    3797 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
I1117 14:28:51.325656    3797 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
I1117 14:28:51.325664    3797 cache.go:206] Successfully downloaded all kic artifacts
I1117 14:28:51.325712    3797 start.go:313] acquiring machines lock for functional-20211117142648-2140: {Name:mk0ffa36ccb8092a6f2338223436899c154ee29e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 14:28:51.325787    3797 start.go:317] acquired machines lock for "functional-20211117142648-2140" in 59.856µs
I1117 14:28:51.325808    3797 start.go:93] Skipping create...Using existing machine configuration
I1117 14:28:51.325812    3797 fix.go:55] fixHost starting: 
I1117 14:28:51.326074    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:51.433132    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:51.433190    3797 fix.go:108] recreateIfNeeded on functional-20211117142648-2140: state= err=unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:51.433214    3797 fix.go:113] machineExists: false. err=machine does not exist
I1117 14:28:51.460197    3797 out.go:176] * docker "functional-20211117142648-2140" container is missing, will recreate.
I1117 14:28:51.460232    3797 delete.go:124] DEMOLISHING functional-20211117142648-2140 ...
I1117 14:28:51.460438    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:51.568262    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
W1117 14:28:51.568308    3797 stop.go:75] unable to get state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:51.568320    3797 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:51.568707    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:51.681084    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:51.681119    3797 delete.go:82] Unable to get host status for functional-20211117142648-2140, assuming it has already been deleted: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:51.681215    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
W1117 14:28:51.790216    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
I1117 14:28:51.790245    3797 kic.go:360] could not find the container functional-20211117142648-2140 to remove it. will try anyways
I1117 14:28:51.790342    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:51.896671    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
W1117 14:28:51.896705    3797 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:51.896802    3797 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0"
W1117 14:28:52.022260    3797 cli_runner.go:162] docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 14:28:52.022289    3797 oci.go:658] error shutdown functional-20211117142648-2140: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:53.022960    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:53.131118    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:53.131161    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:53.131173    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:28:53.131206    3797 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:53.684125    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:53.793298    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:53.793336    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:53.793344    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:28:53.793361    3797 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:54.882327    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:54.993642    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:54.993682    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:54.993692    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:28:54.993712    3797 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:56.311177    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:56.421198    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:56.421230    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:56.421235    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:28:56.421254    3797 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:58.005567    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:58.113113    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:58.113150    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:58.113161    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:28:58.113197    3797 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:00.459602    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:00.571126    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:00.571164    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:00.571172    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:00.571190    3797 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:05.080127    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:05.195193    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:05.195225    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:05.195242    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:05.195261    3797 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:08.422668    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:08.537181    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:08.537214    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:08.537220    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:08.537240    3797 oci.go:87] couldn't shut down functional-20211117142648-2140 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140

I1117 14:29:08.537331    3797 cli_runner.go:115] Run: docker rm -f -v functional-20211117142648-2140
I1117 14:29:08.645821    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
W1117 14:29:08.752869    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
I1117 14:29:08.752987    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 14:29:08.858990    3797 cli_runner.go:115] Run: docker network rm functional-20211117142648-2140
I1117 14:29:11.593581    3797 cli_runner.go:168] Completed: docker network rm functional-20211117142648-2140: (2.734480004s)
W1117 14:29:11.593836    3797 delete.go:139] delete failed (probably ok) <nil>
I1117 14:29:11.593840    3797 fix.go:120] Sleeping 1 second for extra luck!
I1117 14:29:12.602577    3797 start.go:126] createHost starting for "" (driver="docker")
I1117 14:29:12.651696    3797 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 14:29:12.651906    3797 start.go:160] libmachine.API.Create for "functional-20211117142648-2140" (driver="docker")
I1117 14:29:12.651941    3797 client.go:168] LocalClient.Create starting
I1117 14:29:12.652118    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
I1117 14:29:12.652191    3797 main.go:130] libmachine: Decoding PEM data...
I1117 14:29:12.652216    3797 main.go:130] libmachine: Parsing certificate...
I1117 14:29:12.652330    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
I1117 14:29:12.652377    3797 main.go:130] libmachine: Decoding PEM data...
I1117 14:29:12.652388    3797 main.go:130] libmachine: Parsing certificate...
I1117 14:29:12.653472    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1117 14:29:12.763983    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1117 14:29:12.764074    3797 network_create.go:254] running [docker network inspect functional-20211117142648-2140] to gather additional debugging logs...
I1117 14:29:12.764090    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140
W1117 14:29:12.872980    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 returned with exit code 1
I1117 14:29:12.872997    3797 network_create.go:257] error running [docker network inspect functional-20211117142648-2140]: docker network inspect functional-20211117142648-2140: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20211117142648-2140
I1117 14:29:12.873008    3797 network_create.go:259] output of [docker network inspect functional-20211117142648-2140]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20211117142648-2140
** /stderr **
I1117 14:29:12.873089    3797 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 14:29:12.982933    3797 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001321b8] misses:0}
I1117 14:29:12.982963    3797 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1117 14:29:12.982977    3797 network_create.go:106] attempt to create docker network functional-20211117142648-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1117 14:29:12.983057    3797 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140
I1117 14:29:16.892559    3797 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140: (3.909365408s)
I1117 14:29:16.892578    3797 network_create.go:90] docker network functional-20211117142648-2140 192.168.49.0/24 created
I1117 14:29:16.892596    3797 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117142648-2140" container
I1117 14:29:16.892699    3797 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 14:29:17.020685    3797 cli_runner.go:115] Run: docker volume create functional-20211117142648-2140 --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --label created_by.minikube.sigs.k8s.io=true
I1117 14:29:17.129972    3797 oci.go:102] Successfully created a docker volume functional-20211117142648-2140
I1117 14:29:17.130089    3797 cli_runner.go:115] Run: docker run --rm --name functional-20211117142648-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --entrypoint /usr/bin/test -v functional-20211117142648-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 14:29:17.566365    3797 oci.go:106] Successfully prepared a docker volume functional-20211117142648-2140
E1117 14:29:17.566432    3797 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
I1117 14:29:17.566436    3797 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 14:29:17.566457    3797 client.go:171] LocalClient.Create took 4.914397556s
I1117 14:29:17.566461    3797 kic.go:179] Starting extracting preloaded images to volume ...
I1117 14:29:17.566568    3797 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1117 14:29:19.568044    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 14:29:19.568135    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:19.704301    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:19.704396    3797 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:19.853899    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:19.973058    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:19.973140    3797 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:20.278372    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:20.398567    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:20.398644    3797 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:20.969945    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:21.092330    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
W1117 14:29:21.092412    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
W1117 14:29:21.092425    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:21.092433    3797 start.go:129] duration metric: createHost completed in 8.489649896s
I1117 14:29:21.092493    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 14:29:21.092557    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:21.212184    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:21.212259    3797 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:21.391281    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:21.515478    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:21.515547    3797 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:21.846045    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:21.975168    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:21.975244    3797 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:22.441449    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:22.558262    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
W1117 14:29:22.558335    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
W1117 14:29:22.558346    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:22.558362    3797 fix.go:57] fixHost completed within 31.231821737s
I1117 14:29:22.558368    3797 start.go:80] releasing machines lock for "functional-20211117142648-2140", held for 31.231855741s
W1117 14:29:22.558382    3797 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 14:29:22.558543    3797 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 14:29:22.558553    3797 start.go:547] Will try again in 5 seconds ...
I1117 14:29:23.490472    3797 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.923720943s)
I1117 14:29:23.490488    3797 kic.go:188] duration metric: took 5.923890 seconds to extract preloaded images to volume
I1117 14:29:27.568571    3797 start.go:313] acquiring machines lock for functional-20211117142648-2140: {Name:mk0ffa36ccb8092a6f2338223436899c154ee29e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 14:29:27.568718    3797 start.go:317] acquired machines lock for "functional-20211117142648-2140" in 127.45µs
I1117 14:29:27.568753    3797 start.go:93] Skipping create...Using existing machine configuration
I1117 14:29:27.568758    3797 fix.go:55] fixHost starting: 
I1117 14:29:27.569179    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:27.683038    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:27.683071    3797 fix.go:108] recreateIfNeeded on functional-20211117142648-2140: state= err=unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:27.683078    3797 fix.go:113] machineExists: false. err=machine does not exist
I1117 14:29:27.731919    3797 out.go:176] * docker "functional-20211117142648-2140" container is missing, will recreate.
I1117 14:29:27.731944    3797 delete.go:124] DEMOLISHING functional-20211117142648-2140 ...
I1117 14:29:27.732141    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:27.840313    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
W1117 14:29:27.840353    3797 stop.go:75] unable to get state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:27.840363    3797 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:27.841673    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:27.950637    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:27.950673    3797 delete.go:82] Unable to get host status for functional-20211117142648-2140, assuming it has already been deleted: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:27.950773    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
W1117 14:29:28.060256    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
I1117 14:29:28.060281    3797 kic.go:360] could not find the container functional-20211117142648-2140 to remove it. will try anyways
I1117 14:29:28.060396    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:28.167170    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
W1117 14:29:28.167205    3797 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:28.167311    3797 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0"
W1117 14:29:28.276287    3797 cli_runner.go:162] docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 14:29:28.276303    3797 oci.go:658] error shutdown functional-20211117142648-2140: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:29.278223    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:29.389337    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:29.389369    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:29.389376    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:29.389393    3797 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:29.786102    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:29.898246    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:29.898277    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:29.898285    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:29.898304    3797 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:30.496761    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:30.607973    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:30.608013    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:30.608030    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:30.608052    3797 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:31.936732    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:32.051601    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:32.051637    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:32.051651    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:32.051670    3797 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:33.264454    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:33.374141    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:33.374177    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:33.374189    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:33.374206    3797 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:35.158716    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:35.265863    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:35.265895    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:35.265910    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:35.265931    3797 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:38.535473    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:38.647715    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:38.647755    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:38.647764    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:38.647780    3797 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:44.750001    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:44.865258    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:44.865291    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:44.865296    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:44.865318    3797 oci.go:87] couldn't shut down functional-20211117142648-2140 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:44.865428    3797 cli_runner.go:115] Run: docker rm -f -v functional-20211117142648-2140
I1117 14:29:44.974014    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
W1117 14:29:45.083280    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
I1117 14:29:45.083390    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 14:29:45.195503    3797 cli_runner.go:115] Run: docker network rm functional-20211117142648-2140
I1117 14:29:48.047685    3797 cli_runner.go:168] Completed: docker network rm functional-20211117142648-2140: (2.852071943s)
W1117 14:29:48.047961    3797 delete.go:139] delete failed (probably ok) <nil>
I1117 14:29:48.047965    3797 fix.go:120] Sleeping 1 second for extra luck!
I1117 14:29:49.048256    3797 start.go:126] createHost starting for "" (driver="docker")
I1117 14:29:49.075621    3797 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 14:29:49.075837    3797 start.go:160] libmachine.API.Create for "functional-20211117142648-2140" (driver="docker")
I1117 14:29:49.075894    3797 client.go:168] LocalClient.Create starting
I1117 14:29:49.076071    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
I1117 14:29:49.076148    3797 main.go:130] libmachine: Decoding PEM data...
I1117 14:29:49.076172    3797 main.go:130] libmachine: Parsing certificate...
I1117 14:29:49.076267    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
I1117 14:29:49.076321    3797 main.go:130] libmachine: Decoding PEM data...
I1117 14:29:49.076339    3797 main.go:130] libmachine: Parsing certificate...
I1117 14:29:49.098184    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1117 14:29:49.208567    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1117 14:29:49.208705    3797 network_create.go:254] running [docker network inspect functional-20211117142648-2140] to gather additional debugging logs...
I1117 14:29:49.208724    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140
W1117 14:29:49.317875    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 returned with exit code 1
I1117 14:29:49.317896    3797 network_create.go:257] error running [docker network inspect functional-20211117142648-2140]: docker network inspect functional-20211117142648-2140: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20211117142648-2140
I1117 14:29:49.317907    3797 network_create.go:259] output of [docker network inspect functional-20211117142648-2140]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20211117142648-2140

** /stderr **
I1117 14:29:49.318014    3797 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 14:29:49.426575    3797 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001321b8] amended:false}} dirty:map[] misses:0}
I1117 14:29:49.426600    3797 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1117 14:29:49.426769    3797 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001321b8] amended:true}} dirty:map[192.168.49.0:0xc0001321b8 192.168.58.0:0xc000186290] misses:0}
I1117 14:29:49.426781    3797 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1117 14:29:49.426786    3797 network_create.go:106] attempt to create docker network functional-20211117142648-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1117 14:29:49.426872    3797 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140
I1117 14:29:53.299742    3797 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140: (3.872719387s)
I1117 14:29:53.299761    3797 network_create.go:90] docker network functional-20211117142648-2140 192.168.58.0/24 created
I1117 14:29:53.299783    3797 kic.go:106] calculated static IP "192.168.58.2" for the "functional-20211117142648-2140" container
I1117 14:29:53.299896    3797 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 14:29:53.409808    3797 cli_runner.go:115] Run: docker volume create functional-20211117142648-2140 --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --label created_by.minikube.sigs.k8s.io=true
I1117 14:29:53.515033    3797 oci.go:102] Successfully created a docker volume functional-20211117142648-2140
I1117 14:29:53.515165    3797 cli_runner.go:115] Run: docker run --rm --name functional-20211117142648-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --entrypoint /usr/bin/test -v functional-20211117142648-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 14:29:53.943240    3797 oci.go:106] Successfully prepared a docker volume functional-20211117142648-2140
E1117 14:29:53.943284    3797 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
I1117 14:29:53.943289    3797 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 14:29:53.943298    3797 client.go:171] LocalClient.Create took 4.867288969s
I1117 14:29:53.943310    3797 kic.go:179] Starting extracting preloaded images to volume ...
I1117 14:29:53.943404    3797 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1117 14:29:55.945897    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 14:29:55.945981    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:56.083098    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:56.083184    3797 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:56.281845    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:56.398584    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:56.398759    3797 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:56.704670    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:56.826451    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:56.826537    3797 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:57.531465    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:57.650557    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
W1117 14:29:57.650654    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
W1117 14:29:57.650688    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:57.650703    3797 start.go:129] duration metric: createHost completed in 8.602237402s
I1117 14:29:57.650776    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 14:29:57.650843    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:57.796584    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:57.796668    3797 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:58.138508    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:58.282404    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:58.282484    3797 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:58.731488    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:58.858988    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:58.859067    3797 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:59.437319    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:59.547536    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
W1117 14:29:59.547624    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
W1117 14:29:59.547635    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:


stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:59.547645    3797 fix.go:57] fixHost completed within 31.978150812s
I1117 14:29:59.547654    3797 start.go:80] releasing machines lock for "functional-20211117142648-2140", held for 31.978191455s
W1117 14:29:59.547789    3797 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117142648-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 14:29:59.601127    3797 out.go:176] 
W1117 14:29:59.601390    3797 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 14:29:59.601403    3797 out.go:241] * 
W1117 14:29:59.602581    3797 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

* 

***
--- FAIL: TestFunctional/serial/LogsCmd (0.42s)

TestFunctional/serial/LogsFileCmd (0.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1190: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20211117142648-21401651359746/logs.txt
functional_test.go:1190: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20211117142648-21401651359746/logs.txt: exit status 80 (420.32613ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1192: out/minikube-darwin-amd64 -p functional-20211117142648-2140 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20211117142648-21401651359746/logs.txt failed: exit status 80
functional_test.go:1195: expected empty minikube logs output, but got: 
***
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr *****
functional_test.go:1165: expected minikube logs to include word: -"Linux"- but got 
**** 
* ==> Audit <==
* |---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command |                           Args                           |               Profile               |  User   | Version |          Start Time           |           End Time            |
|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|
| delete  | --all                                                    | download-only-20211117142321-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:08 PST | Wed, 17 Nov 2021 14:24:09 PST |
| delete  | -p                                                       | download-only-20211117142321-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:09 PST | Wed, 17 Nov 2021 14:24:10 PST |
|         | download-only-20211117142321-2140                        |                                     |         |         |                               |                               |
| delete  | -p                                                       | download-only-20211117142321-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:10 PST | Wed, 17 Nov 2021 14:24:10 PST |
|         | download-only-20211117142321-2140                        |                                     |         |         |                               |                               |
| delete  | -p                                                       | download-docker-20211117142410-2140 | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:24:19 PST | Wed, 17 Nov 2021 14:24:20 PST |
|         | download-docker-20211117142410-2140                      |                                     |         |         |                               |                               |
| delete  | -p addons-20211117142420-2140                            | addons-20211117142420-2140          | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:25:06 PST | Wed, 17 Nov 2021 14:25:10 PST |
| delete  | -p nospam-20211117142510-2140                            | nospam-20211117142510-2140          | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:26:44 PST | Wed, 17 Nov 2021 14:26:48 PST |
| -p      | functional-20211117142648-2140 cache add                 | functional-20211117142648-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:46 PST | Wed, 17 Nov 2021 14:28:47 PST |
|         | minikube-local-cache-test:functional-20211117142648-2140 |                                     |         |         |                               |                               |
| -p      | functional-20211117142648-2140 cache delete              | functional-20211117142648-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:47 PST | Wed, 17 Nov 2021 14:28:47 PST |
|         | minikube-local-cache-test:functional-20211117142648-2140 |                                     |         |         |                               |                               |
| cache   | list                                                     | minikube                            | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:47 PST | Wed, 17 Nov 2021 14:28:47 PST |
| -p      | functional-20211117142648-2140                           | functional-20211117142648-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:28:48 PST | Wed, 17 Nov 2021 14:28:48 PST |
|         | cache reload                                             |                                     |         |         |                               |                               |
|---------|----------------------------------------------------------|-------------------------------------|---------|---------|-------------------------------|-------------------------------|

* 
* ==> Last Start <==
* Log file created at: 2021/11/17 14:28:50
Running on machine: 37310
Binary: Built with gc go1.17.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1117 14:28:50.453976    3797 out.go:297] Setting OutFile to fd 1 ...
I1117 14:28:50.454101    3797 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 14:28:50.454104    3797 out.go:310] Setting ErrFile to fd 2...
I1117 14:28:50.454106    3797 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 14:28:50.454178    3797 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
I1117 14:28:50.454455    3797 out.go:304] Setting JSON to false
I1117 14:28:50.479425    3797 start.go:112] hostinfo: {"hostname":"37310.local","uptime":1705,"bootTime":1637186425,"procs":345,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
W1117 14:28:50.479515    3797 start.go:120] gopshost.Virtualization returned error: not implemented yet
I1117 14:28:50.506691    3797 out.go:176] * [functional-20211117142648-2140] minikube v1.24.0 on Darwin 11.2.3
I1117 14:28:50.506942    3797 notify.go:174] Checking for updates...
I1117 14:28:50.554344    3797 out.go:176]   - MINIKUBE_LOCATION=12739
I1117 14:28:50.580007    3797 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
I1117 14:28:50.606367    3797 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
I1117 14:28:50.632166    3797 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
I1117 14:28:50.632507    3797 config.go:176] Loaded profile config "functional-20211117142648-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 14:28:50.632539    3797 driver.go:343] Setting default libvirt URI to qemu:///system
I1117 14:28:50.727825    3797 docker.go:132] docker version: linux-20.10.6
I1117 14:28:50.727938    3797 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 14:28:50.904208    3797 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 22:28:50.843477149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
I1117 14:28:50.952845    3797 out.go:176] * Using the docker driver based on existing profile
I1117 14:28:50.952891    3797 start.go:280] selected driver: docker
I1117 14:28:50.952900    3797 start.go:775] validating driver "docker" against &{Name:functional-20211117142648-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117142648-2140 Namespace:default APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1117 14:28:50.953010    3797 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1117 14:28:50.953389    3797 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1117 14:28:51.130521    3797 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:48 SystemTime:2021-11-17 22:28:51.070382389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
I1117 14:28:51.132531    3797 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1117 14:28:51.132556    3797 cni.go:93] Creating CNI manager for ""
I1117 14:28:51.132561    3797 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I1117 14:28:51.132572    3797 start_flags.go:282] config:
{Name:functional-20211117142648-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117142648-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1117 14:28:51.181173    3797 out.go:176] * Starting control plane node functional-20211117142648-2140 in cluster functional-20211117142648-2140
I1117 14:28:51.181244    3797 cache.go:118] Beginning downloading kic base image for docker with docker
I1117 14:28:51.207291    3797 out.go:176] * Pulling base image ...
I1117 14:28:51.207344    3797 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 14:28:51.207423    3797 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
I1117 14:28:51.207444    3797 cache.go:57] Caching tarball of preloaded images
I1117 14:28:51.207450    3797 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
I1117 14:28:51.207661    3797 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1117 14:28:51.207678    3797 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
I1117 14:28:51.208383    3797 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/functional-20211117142648-2140/config.json ...
I1117 14:28:51.325648    3797 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
I1117 14:28:51.325656    3797 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
I1117 14:28:51.325664    3797 cache.go:206] Successfully downloaded all kic artifacts
I1117 14:28:51.325712    3797 start.go:313] acquiring machines lock for functional-20211117142648-2140: {Name:mk0ffa36ccb8092a6f2338223436899c154ee29e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 14:28:51.325787    3797 start.go:317] acquired machines lock for "functional-20211117142648-2140" in 59.856µs
I1117 14:28:51.325808    3797 start.go:93] Skipping create...Using existing machine configuration
I1117 14:28:51.325812    3797 fix.go:55] fixHost starting: 
I1117 14:28:51.326074    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:51.433132    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:51.433190    3797 fix.go:108] recreateIfNeeded on functional-20211117142648-2140: state= err=unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:51.433214    3797 fix.go:113] machineExists: false. err=machine does not exist
I1117 14:28:51.460197    3797 out.go:176] * docker "functional-20211117142648-2140" container is missing, will recreate.
I1117 14:28:51.460232    3797 delete.go:124] DEMOLISHING functional-20211117142648-2140 ...
I1117 14:28:51.460438    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:51.568262    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
W1117 14:28:51.568308    3797 stop.go:75] unable to get state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:51.568320    3797 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:51.568707    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:51.681084    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:51.681119    3797 delete.go:82] Unable to get host status for functional-20211117142648-2140, assuming it has already been deleted: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:51.681215    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
W1117 14:28:51.790216    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
I1117 14:28:51.790245    3797 kic.go:360] could not find the container functional-20211117142648-2140 to remove it. will try anyways
I1117 14:28:51.790342    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:51.896671    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
W1117 14:28:51.896705    3797 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:51.896802    3797 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0"
W1117 14:28:52.022260    3797 cli_runner.go:162] docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 14:28:52.022289    3797 oci.go:658] error shutdown functional-20211117142648-2140: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:53.022960    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:53.131118    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:53.131161    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:53.131173    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:28:53.131206    3797 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:53.684125    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:53.793298    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:53.793336    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:53.793344    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:28:53.793361    3797 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:54.882327    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:54.993642    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:54.993682    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:54.993692    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:28:54.993712    3797 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:56.311177    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:56.421198    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:56.421230    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:56.421235    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:28:56.421254    3797 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:58.005567    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:28:58.113113    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:28:58.113150    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:28:58.113161    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:28:58.113197    3797 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:00.459602    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:00.571126    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:00.571164    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:00.571172    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:00.571190    3797 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:05.080127    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:05.195193    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:05.195225    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:05.195242    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:05.195261    3797 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:08.422668    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:08.537181    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:08.537214    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:08.537220    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:08.537240    3797 oci.go:87] couldn't shut down functional-20211117142648-2140 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140

I1117 14:29:08.537331    3797 cli_runner.go:115] Run: docker rm -f -v functional-20211117142648-2140
I1117 14:29:08.645821    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
W1117 14:29:08.752869    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
I1117 14:29:08.752987    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 14:29:08.858990    3797 cli_runner.go:115] Run: docker network rm functional-20211117142648-2140
I1117 14:29:11.593581    3797 cli_runner.go:168] Completed: docker network rm functional-20211117142648-2140: (2.734480004s)
W1117 14:29:11.593836    3797 delete.go:139] delete failed (probably ok) <nil>
I1117 14:29:11.593840    3797 fix.go:120] Sleeping 1 second for extra luck!
I1117 14:29:12.602577    3797 start.go:126] createHost starting for "" (driver="docker")
I1117 14:29:12.651696    3797 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 14:29:12.651906    3797 start.go:160] libmachine.API.Create for "functional-20211117142648-2140" (driver="docker")
I1117 14:29:12.651941    3797 client.go:168] LocalClient.Create starting
I1117 14:29:12.652118    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
I1117 14:29:12.652191    3797 main.go:130] libmachine: Decoding PEM data...
I1117 14:29:12.652216    3797 main.go:130] libmachine: Parsing certificate...
I1117 14:29:12.652330    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
I1117 14:29:12.652377    3797 main.go:130] libmachine: Decoding PEM data...
I1117 14:29:12.652388    3797 main.go:130] libmachine: Parsing certificate...
I1117 14:29:12.653472    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1117 14:29:12.763983    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1117 14:29:12.764074    3797 network_create.go:254] running [docker network inspect functional-20211117142648-2140] to gather additional debugging logs...
I1117 14:29:12.764090    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140
W1117 14:29:12.872980    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 returned with exit code 1
I1117 14:29:12.872997    3797 network_create.go:257] error running [docker network inspect functional-20211117142648-2140]: docker network inspect functional-20211117142648-2140: exit status 1
stdout:
[]

stderr:
Error: No such network: functional-20211117142648-2140
I1117 14:29:12.873008    3797 network_create.go:259] output of [docker network inspect functional-20211117142648-2140]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: functional-20211117142648-2140

** /stderr **
I1117 14:29:12.873089    3797 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 14:29:12.982933    3797 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001321b8] misses:0}
I1117 14:29:12.982963    3797 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
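The subnet bookkeeping above (network 192.168.49.0/24, gateway 192.168.49.1, clients .2 through .254) follows the common convention of using the first host address as the gateway. A sketch of that derivation with the standard library; `gatewayFor` is a hypothetical helper for illustration, not minikube's actual code:

```go
package main

import (
	"fmt"
	"net"
)

// gatewayFor returns the first host address of an IPv4 subnet
// (network address + 1), the convention the log above shows for
// the minikube bridge network. Hypothetical helper for illustration.
func gatewayFor(cidr string) (string, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return "", fmt.Errorf("not an IPv4 subnet: %s", cidr)
	}
	return net.IPv4(ip[0], ip[1], ip[2], ip[3]+1).String(), nil
}

func main() {
	gw, err := gatewayFor("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println(gw) // 192.168.49.1
}
```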
I1117 14:29:12.982977    3797 network_create.go:106] attempt to create docker network functional-20211117142648-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1117 14:29:12.983057    3797 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140
I1117 14:29:16.892559    3797 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140: (3.909365408s)
I1117 14:29:16.892578    3797 network_create.go:90] docker network functional-20211117142648-2140 192.168.49.0/24 created
I1117 14:29:16.892596    3797 kic.go:106] calculated static IP "192.168.49.2" for the "functional-20211117142648-2140" container
I1117 14:29:16.892699    3797 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 14:29:17.020685    3797 cli_runner.go:115] Run: docker volume create functional-20211117142648-2140 --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --label created_by.minikube.sigs.k8s.io=true
I1117 14:29:17.129972    3797 oci.go:102] Successfully created a docker volume functional-20211117142648-2140
I1117 14:29:17.130089    3797 cli_runner.go:115] Run: docker run --rm --name functional-20211117142648-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --entrypoint /usr/bin/test -v functional-20211117142648-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 14:29:17.566365    3797 oci.go:106] Successfully prepared a docker volume functional-20211117142648-2140
E1117 14:29:17.566432    3797 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
I1117 14:29:17.566436    3797 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 14:29:17.566457    3797 client.go:171] LocalClient.Create took 4.914397556s
I1117 14:29:17.566461    3797 kic.go:179] Starting extracting preloaded images to volume ...
I1117 14:29:17.566568    3797 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1117 14:29:19.568044    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 14:29:19.568135    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:19.704301    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:19.704396    3797 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:19.853899    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:19.973058    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:19.973140    3797 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:20.278372    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:20.398567    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:20.398644    3797 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:20.969945    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:21.092330    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
W1117 14:29:21.092412    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140

W1117 14:29:21.092425    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:21.092433    3797 start.go:129] duration metric: createHost completed in 8.489649896s
I1117 14:29:21.092493    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 14:29:21.092557    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:21.212184    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:21.212259    3797 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:21.391281    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:21.515478    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:21.515547    3797 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:21.846045    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:21.975168    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:21.975244    3797 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:22.441449    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:22.558262    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
W1117 14:29:22.558335    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140

W1117 14:29:22.558346    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:22.558362    3797 fix.go:57] fixHost completed within 31.231821737s
I1117 14:29:22.558368    3797 start.go:80] releasing machines lock for "functional-20211117142648-2140", held for 31.231855741s
W1117 14:29:22.558382    3797 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 14:29:22.558543    3797 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 14:29:22.558553    3797 start.go:547] Will try again in 5 seconds ...
I1117 14:29:23.490472    3797 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.923720943s)
I1117 14:29:23.490488    3797 kic.go:188] duration metric: took 5.923890 seconds to extract preloaded images to volume
I1117 14:29:27.568571    3797 start.go:313] acquiring machines lock for functional-20211117142648-2140: {Name:mk0ffa36ccb8092a6f2338223436899c154ee29e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1117 14:29:27.568718    3797 start.go:317] acquired machines lock for "functional-20211117142648-2140" in 127.45µs
I1117 14:29:27.568753    3797 start.go:93] Skipping create...Using existing machine configuration
I1117 14:29:27.568758    3797 fix.go:55] fixHost starting: 
I1117 14:29:27.569179    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:27.683038    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:27.683071    3797 fix.go:108] recreateIfNeeded on functional-20211117142648-2140: state= err=unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:27.683078    3797 fix.go:113] machineExists: false. err=machine does not exist
I1117 14:29:27.731919    3797 out.go:176] * docker "functional-20211117142648-2140" container is missing, will recreate.
I1117 14:29:27.731944    3797 delete.go:124] DEMOLISHING functional-20211117142648-2140 ...
I1117 14:29:27.732141    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:27.840313    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
W1117 14:29:27.840353    3797 stop.go:75] unable to get state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:27.840363    3797 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:27.841673    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:27.950637    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:27.950673    3797 delete.go:82] Unable to get host status for functional-20211117142648-2140, assuming it has already been deleted: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:27.950773    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
W1117 14:29:28.060256    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
I1117 14:29:28.060281    3797 kic.go:360] could not find the container functional-20211117142648-2140 to remove it. will try anyways
I1117 14:29:28.060396    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:28.167170    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
W1117 14:29:28.167205    3797 oci.go:83] error getting container status, will try to delete anyways: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:28.167311    3797 cli_runner.go:115] Run: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0"
W1117 14:29:28.276287    3797 cli_runner.go:162] docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0" returned with exit code 1
I1117 14:29:28.276303    3797 oci.go:658] error shutdown functional-20211117142648-2140: docker exec --privileged -t functional-20211117142648-2140 /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:29.278223    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:29.389337    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:29.389369    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:29.389376    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:29.389393    3797 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:29.786102    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:29.898246    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:29.898277    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:29.898285    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:29.898304    3797 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:30.496761    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:30.607973    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:30.608013    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:30.608030    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:30.608052    3797 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:31.936732    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:32.051601    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:32.051637    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:32.051651    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:32.051670    3797 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:33.264454    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:33.374141    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:33.374177    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:33.374189    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:33.374206    3797 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:35.158716    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:35.265863    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:35.265895    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:35.265910    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:35.265931    3797 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:38.535473    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:38.647715    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:38.647755    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:38.647764    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:38.647780    3797 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:44.750001    3797 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:29:44.865258    3797 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:29:44.865291    3797 oci.go:670] temporary error verifying shutdown: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:44.865296    3797 oci.go:672] temporary error: container functional-20211117142648-2140 status is  but expect it to be exited
I1117 14:29:44.865318    3797 oci.go:87] couldn't shut down functional-20211117142648-2140 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:44.865428    3797 cli_runner.go:115] Run: docker rm -f -v functional-20211117142648-2140
I1117 14:29:44.974014    3797 cli_runner.go:115] Run: docker container inspect -f {{.Id}} functional-20211117142648-2140
W1117 14:29:45.083280    3797 cli_runner.go:162] docker container inspect -f {{.Id}} functional-20211117142648-2140 returned with exit code 1
I1117 14:29:45.083390    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 14:29:45.195503    3797 cli_runner.go:115] Run: docker network rm functional-20211117142648-2140
I1117 14:29:48.047685    3797 cli_runner.go:168] Completed: docker network rm functional-20211117142648-2140: (2.852071943s)
W1117 14:29:48.047961    3797 delete.go:139] delete failed (probably ok) <nil>
I1117 14:29:48.047965    3797 fix.go:120] Sleeping 1 second for extra luck!
I1117 14:29:49.048256    3797 start.go:126] createHost starting for "" (driver="docker")
I1117 14:29:49.075621    3797 out.go:203] * Creating docker container (CPUs=2, Memory=4000MB) ...
I1117 14:29:49.075837    3797 start.go:160] libmachine.API.Create for "functional-20211117142648-2140" (driver="docker")
I1117 14:29:49.075894    3797 client.go:168] LocalClient.Create starting
I1117 14:29:49.076071    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
I1117 14:29:49.076148    3797 main.go:130] libmachine: Decoding PEM data...
I1117 14:29:49.076172    3797 main.go:130] libmachine: Parsing certificate...
I1117 14:29:49.076267    3797 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
I1117 14:29:49.076321    3797 main.go:130] libmachine: Decoding PEM data...
I1117 14:29:49.076339    3797 main.go:130] libmachine: Parsing certificate...
I1117 14:29:49.098184    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1117 14:29:49.208567    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1117 14:29:49.208705    3797 network_create.go:254] running [docker network inspect functional-20211117142648-2140] to gather additional debugging logs...
I1117 14:29:49.208724    3797 cli_runner.go:115] Run: docker network inspect functional-20211117142648-2140
W1117 14:29:49.317875    3797 cli_runner.go:162] docker network inspect functional-20211117142648-2140 returned with exit code 1
I1117 14:29:49.317896    3797 network_create.go:257] error running [docker network inspect functional-20211117142648-2140]: docker network inspect functional-20211117142648-2140: exit status 1
stdout:
[]
stderr:
Error: No such network: functional-20211117142648-2140
I1117 14:29:49.317907    3797 network_create.go:259] output of [docker network inspect functional-20211117142648-2140]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error: No such network: functional-20211117142648-2140
** /stderr **
I1117 14:29:49.318014    3797 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1117 14:29:49.426575    3797 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001321b8] amended:false}} dirty:map[] misses:0}
I1117 14:29:49.426600    3797 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1117 14:29:49.426769    3797 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0001321b8] amended:true}} dirty:map[192.168.49.0:0xc0001321b8 192.168.58.0:0xc000186290] misses:0}
I1117 14:29:49.426781    3797 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1117 14:29:49.426786    3797 network_create.go:106] attempt to create docker network functional-20211117142648-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1117 14:29:49.426872    3797 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140
I1117 14:29:53.299742    3797 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true functional-20211117142648-2140: (3.872719387s)
I1117 14:29:53.299761    3797 network_create.go:90] docker network functional-20211117142648-2140 192.168.58.0/24 created
I1117 14:29:53.299783    3797 kic.go:106] calculated static IP "192.168.58.2" for the "functional-20211117142648-2140" container
I1117 14:29:53.299896    3797 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1117 14:29:53.409808    3797 cli_runner.go:115] Run: docker volume create functional-20211117142648-2140 --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --label created_by.minikube.sigs.k8s.io=true
I1117 14:29:53.515033    3797 oci.go:102] Successfully created a docker volume functional-20211117142648-2140
I1117 14:29:53.515165    3797 cli_runner.go:115] Run: docker run --rm --name functional-20211117142648-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-20211117142648-2140 --entrypoint /usr/bin/test -v functional-20211117142648-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1117 14:29:53.943240    3797 oci.go:106] Successfully prepared a docker volume functional-20211117142648-2140
E1117 14:29:53.943284    3797 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
I1117 14:29:53.943289    3797 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
I1117 14:29:53.943298    3797 client.go:171] LocalClient.Create took 4.867288969s
I1117 14:29:53.943310    3797 kic.go:179] Starting extracting preloaded images to volume ...
I1117 14:29:53.943404    3797 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-20211117142648-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1117 14:29:55.945897    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 14:29:55.945981    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:56.083098    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:56.083184    3797 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:56.281845    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:56.398584    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:56.398759    3797 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:56.704670    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:56.826451    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:56.826537    3797 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:57.531465    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:57.650557    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
W1117 14:29:57.650654    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
W1117 14:29:57.650688    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:57.650703    3797 start.go:129] duration metric: createHost completed in 8.602237402s
I1117 14:29:57.650776    3797 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1117 14:29:57.650843    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:57.796584    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:57.796668    3797 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:58.138508    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:58.282404    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:58.282484    3797 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:58.731488    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:58.858988    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
I1117 14:29:58.859067    3797 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:59.437319    3797 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140
W1117 14:29:59.547536    3797 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140 returned with exit code 1
W1117 14:29:59.547624    3797 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
W1117 14:29:59.547635    3797 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "functional-20211117142648-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211117142648-2140: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
I1117 14:29:59.547645    3797 fix.go:57] fixHost completed within 31.978150812s
I1117 14:29:59.547654    3797 start.go:80] releasing machines lock for "functional-20211117142648-2140", held for 31.978191455s
W1117 14:29:59.547789    3797 out.go:241] * Failed to start docker container. Running "minikube delete -p functional-20211117142648-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
I1117 14:29:59.601127    3797 out.go:176] 
W1117 14:29:59.601390    3797 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
W1117 14:29:59.601403    3797 out.go:241] * 
W1117 14:29:59.602581    3797 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
* 
--- FAIL: TestFunctional/serial/LogsFileCmd (0.42s)
TestFunctional/parallel/DashboardCmd (0.58s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:847: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211117142648-2140 --alsologtostderr -v=1]
functional_test.go:860: output didn't produce a URL
functional_test.go:852: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211117142648-2140 --alsologtostderr -v=1] ...
functional_test.go:852: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211117142648-2140 --alsologtostderr -v=1] stdout:
functional_test.go:852: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211117142648-2140 --alsologtostderr -v=1] stderr:
I1117 14:30:38.789911    4590 out.go:297] Setting OutFile to fd 1 ...
I1117 14:30:38.790140    4590 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 14:30:38.790146    4590 out.go:310] Setting ErrFile to fd 2...
I1117 14:30:38.790149    4590 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 14:30:38.790216    4590 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
I1117 14:30:38.790392    4590 mustload.go:65] Loading cluster: functional-20211117142648-2140
I1117 14:30:38.790623    4590 config.go:176] Loaded profile config "functional-20211117142648-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 14:30:38.790965    4590 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
W1117 14:30:38.900707    4590 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
I1117 14:30:38.928036    4590 out.go:176] 
W1117 14:30:38.929149    4590 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
stdout:
stderr:
Error: No such container: functional-20211117142648-2140
W1117 14:30:38.929176    4590 out.go:241] * 
* 
W1117 14:30:38.932222    4590 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                              │
│    * If the above advice does not help, please let us know:                                                                  │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                                │
│                                                                                                                              │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                     │
│    * Please also attach the following file to the GitHub issue:                                                              │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log    │
│                                                                                                                              │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1117 14:30:38.953528    4590 out.go:176] 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "0a4db5d986095fce21025ba51dff65dd2f051020fce3b8f38faf4a77983ca7a2",
	        "Created": "2021-11-17T22:29:49.540695246Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (154.912054ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:39.334167    4601 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.58s)

TestFunctional/parallel/StatusCmd (0.75s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:796: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 status
functional_test.go:796: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 status: exit status 7 (149.790658ms)

-- stdout --
	functional-20211117142648-2140
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 14:30:36.835542    4526 status.go:258] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	E1117 14:30:36.835550    4526 status.go:261] The "functional-20211117142648-2140" host does not exist!

** /stderr **
functional_test.go:798: failed to run minikube status. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 status" : exit status 7
functional_test.go:802: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:802: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (189.399589ms)

-- stdout --
	host:Nonexistent,kublet:Nonexistent,apiserver:Nonexistent,kubeconfig:Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:37.025135    4531 status.go:258] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	E1117 14:30:37.025143    4531 status.go:261] The "functional-20211117142648-2140" host does not exist!

** /stderr **
functional_test.go:804: failed to run minikube status with custom format: args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:814: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 status -o json
functional_test.go:814: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 status -o json: exit status 7 (148.330415ms)

-- stdout --
	{"Name":"functional-20211117142648-2140","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	E1117 14:30:37.173695    4536 status.go:258] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	E1117 14:30:37.173703    4536 status.go:261] The "functional-20211117142648-2140" host does not exist!

** /stderr **
functional_test.go:816: failed to run minikube status with json output. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "0a4db5d986095fce21025ba51dff65dd2f051020fce3b8f38faf4a77983ca7a2",
	        "Created": "2021-11-17T22:29:49.540695246Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (149.451124ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:37.436170    4545 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/StatusCmd (0.75s)

TestFunctional/parallel/ServiceCmd (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1372: (dbg) Run:  kubectl --context functional-20211117142648-2140 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1372: (dbg) Non-zero exit: kubectl --context functional-20211117142648-2140 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8: exit status 1 (38.179566ms)

** stderr ** 
	W1117 14:30:10.176481    4403 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	error: context "functional-20211117142648-2140" does not exist

** /stderr **
functional_test.go:1376: failed to create hello-node deployment with this command "kubectl --context functional-20211117142648-2140 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8": exit status 1.
functional_test.go:1341: service test failed - dumping debug information
functional_test.go:1342: -----------------------service failure post-mortem--------------------------------
functional_test.go:1345: (dbg) Run:  kubectl --context functional-20211117142648-2140 describe po hello-node
functional_test.go:1345: (dbg) Non-zero exit: kubectl --context functional-20211117142648-2140 describe po hello-node: exit status 1 (37.075495ms)

** stderr ** 
	W1117 14:30:10.213754    4404 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117142648-2140

** /stderr **
functional_test.go:1347: "kubectl --context functional-20211117142648-2140 describe po hello-node" failed: exit status 1
functional_test.go:1349: hello-node pod describe:
functional_test.go:1351: (dbg) Run:  kubectl --context functional-20211117142648-2140 logs -l app=hello-node
functional_test.go:1351: (dbg) Non-zero exit: kubectl --context functional-20211117142648-2140 logs -l app=hello-node: exit status 1 (39.497729ms)

** stderr ** 
	W1117 14:30:10.253407    4405 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117142648-2140

** /stderr **
functional_test.go:1353: "kubectl --context functional-20211117142648-2140 logs -l app=hello-node" failed: exit status 1
functional_test.go:1355: hello-node logs:
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20211117142648-2140 describe svc hello-node
functional_test.go:1357: (dbg) Non-zero exit: kubectl --context functional-20211117142648-2140 describe svc hello-node: exit status 1 (37.531296ms)

** stderr ** 
	W1117 14:30:10.291083    4406 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117142648-2140

** /stderr **
functional_test.go:1359: "kubectl --context functional-20211117142648-2140 describe svc hello-node" failed: exit status 1
functional_test.go:1361: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "0a4db5d986095fce21025ba51dff65dd2f051020fce3b8f38faf4a77983ca7a2",
	        "Created": "2021-11-17T22:29:49.540695246Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (146.592939ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:10.550005    4411 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/ServiceCmd (0.41s)

TestFunctional/parallel/PersistentVolumeClaim (0.26s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:46: failed waiting for storage-provisioner: client config: context "functional-20211117142648-2140" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "0a4db5d986095fce21025ba51dff65dd2f051020fce3b8f38faf4a77983ca7a2",
	        "Created": "2021-11-17T22:29:49.540695246Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (148.764717ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:09.867049    4392 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.26s)

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1517: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "echo hello"
functional_test.go:1517: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "echo hello": exit status 80 (239.866536ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_d94a149758de690cb366888a5d8e6efc18cafe43_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1522: failed to run an ssh command. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"echo hello\"" : exit status 80
functional_test.go:1526: expected minikube ssh command output to be -"hello"- but got *"\n\n"*. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"echo hello\""
functional_test.go:1534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "cat /etc/hostname"
functional_test.go:1534: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "cat /etc/hostname": exit status 80 (236.27742ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_e38561299ab5d398426b8e3871f2ff03f1313dcf_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1540: failed to run an ssh command. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"cat /etc/hostname\"" : exit status 80
functional_test.go:1544: expected minikube ssh command output to be -"functional-20211117142648-2140"- but got *"\n\n"*. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/SSHCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "0a4db5d986095fce21025ba51dff65dd2f051020fce3b8f38faf4a77983ca7a2",
	        "Created": "2021-11-17T22:29:49.540695246Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (173.725885ms)
                                                
-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:07.050156    4328 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (0.5s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 80 (254.914488ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_cp_432bb5e0f05c08c5a04418a53c078916822175f9_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 80
helpers_test.go:548: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:548: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /home/docker/cp-test.txt": exit status 80 (244.703147ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_config_781f3cd1cf37d148e80d99dc3fcd6332d76c85a9_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:553: failed to run an cp command. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"sudo cat /home/docker/cp-test.txt\"" : exit status 80
helpers_test.go:562: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"\n\n",
)
--- FAIL: TestFunctional/parallel/CpCmd (0.50s)
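The `(-want +got)` diffs in these failures come from the go-cmp comparison in the test harness; the "got" side is the two empty lines the broken `minikube cp` round-trip produced. A rough, hypothetical Python analogue of the same comparison (difflib standing in for go-cmp) is:

```python
import difflib

# Values taken from the CpCmd failure above.
want = "Test file for checking file cp process"
got = "\n\n"

def diff_want_got(want: str, got: str) -> list[str]:
    """Return unified-diff lines, loosely analogous to go-cmp's (-want +got) output."""
    return list(
        difflib.unified_diff(
            want.splitlines(),
            got.splitlines(),
            fromfile="want",
            tofile="got",
            lineterm="",
        )
    )

for line in diff_want_got(want, got):
    print(line)
```

The diff shows the expected file content removed (`-`) and only blank lines added (`+`), which is exactly what the mismatch report above encodes.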

TestFunctional/parallel/MySQL (0.29s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL


=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1571: (dbg) Run:  kubectl --context functional-20211117142648-2140 replace --force -f testdata/mysql.yaml
functional_test.go:1571: (dbg) Non-zero exit: kubectl --context functional-20211117142648-2140 replace --force -f testdata/mysql.yaml: exit status 1 (37.721938ms)

** stderr ** 
	W1117 14:30:04.689228    4265 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	error: context "functional-20211117142648-2140" does not exist

** /stderr **
functional_test.go:1573: failed to kubectl replace mysql: args "kubectl --context functional-20211117142648-2140 replace --force -f testdata/mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "0a4db5d986095fce21025ba51dff65dd2f051020fce3b8f38faf4a77983ca7a2",
	        "Created": "2021-11-17T22:29:49.540695246Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (147.006299ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:04.944925    4270 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/MySQL (0.29s)
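The MySQL test fails one step earlier than the ssh-based tests: kubectl cannot load any context because the kubeconfig file at the path in the warning above was never written (`Config not found`). A trivial, hypothetical pre-check in the same spirit, assuming only the standard `KUBECONFIG` lookup order:

```python
import os

def kubeconfig_present() -> bool:
    """A kubeconfig that does not exist cannot contain the requested context."""
    path = os.environ.get("KUBECONFIG", os.path.expanduser("~/.kube/config"))
    return os.path.isfile(path)

print(kubeconfig_present())
```

On this run the file is absent, so the check would report `False` before kubectl ever reaches the `context "..." does not exist` error.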

TestFunctional/parallel/FileSync (0.47s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync


=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1707: Checking for existence of /etc/test/nested/copy/2140/hosts within VM
functional_test.go:1709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /etc/test/nested/copy/2140/hosts"
functional_test.go:1709: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /etc/test/nested/copy/2140/hosts": exit status 80 (206.850538ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_f5725c30df3eeb98dbeae6c98cedf61febba4dd5_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1711: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /etc/test/nested/copy/2140/hosts" failed: exit status 80
functional_test.go:1714: file sync test content: 

functional_test.go:1724: /etc/sync.test content mismatch (-want +got):
string(
- 	"Test file for checking file sync process",
+ 	"\n\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/FileSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "0a4db5d986095fce21025ba51dff65dd2f051020fce3b8f38faf4a77983ca7a2",
	        "Created": "2021-11-17T22:29:49.540695246Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (146.466631ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:04.651154    4260 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/FileSync (0.47s)

TestFunctional/parallel/CertSync (1.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync


=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1750: Checking for existence of /etc/ssl/certs/2140.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /etc/ssl/certs/2140.pem"
functional_test.go:1751: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /etc/ssl/certs/2140.pem": exit status 80 (199.8546ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_f8e086ec0f036120f96c42ed39b733ea9a353d67_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1753: failed to check existence of "/etc/ssl/certs/2140.pem" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"sudo cat /etc/ssl/certs/2140.pem\"": exit status 80
functional_test.go:1759: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/2140.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1750: Checking for existence of /usr/share/ca-certificates/2140.pem within VM
functional_test.go:1751: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /usr/share/ca-certificates/2140.pem"
functional_test.go:1751: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /usr/share/ca-certificates/2140.pem": exit status 80 (199.229015ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_63f11b4667188dccab81af3347149cfc50c7cf7b_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1753: failed to check existence of "/usr/share/ca-certificates/2140.pem" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"sudo cat /usr/share/ca-certificates/2140.pem\"": exit status 80
functional_test.go:1759: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/2140.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1750: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1751: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1751: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 80 (200.352799ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_c1fb1ee25ebb7a3edd1a0da000c23bf1f788dc55_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1753: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 80
functional_test.go:1759: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC\nVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x\nETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD\nVQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3D"...,
+ 	"\n\n",
)
functional_test.go:1777: Checking for existence of /etc/ssl/certs/21402.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /etc/ssl/certs/21402.pem"
functional_test.go:1778: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /etc/ssl/certs/21402.pem": exit status 80 (199.479635ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_8d19e2e5f678eb75148382e529345a52c46da466_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1780: failed to check existence of "/etc/ssl/certs/21402.pem" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"sudo cat /etc/ssl/certs/21402.pem\"": exit status 80
functional_test.go:1786: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/21402.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
functional_test.go:1777: Checking for existence of /usr/share/ca-certificates/21402.pem within VM
functional_test.go:1778: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /usr/share/ca-certificates/21402.pem"
functional_test.go:1778: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /usr/share/ca-certificates/21402.pem": exit status 80 (200.468972ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_48355850dcc35afbd3b1f9682c76cfabb9014084_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1780: failed to check existence of "/usr/share/ca-certificates/21402.pem" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"sudo cat /usr/share/ca-certificates/21402.pem\"": exit status 80
functional_test.go:1786: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/21402.pem mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
functional_test.go:1777: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1778: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1778: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 80 (199.634243ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_d951ceeafd466f9a2c9a0a1d19acaa146725ed7e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1780: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 80
functional_test.go:1786: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
string(
- 	"-----BEGIN CERTIFICATE-----\nMIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV\nUzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy\nMDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN\nBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCA"...,
+ 	"\n\n",
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/CertSync]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "0a4db5d986095fce21025ba51dff65dd2f051020fce3b8f38faf4a77983ca7a2",
	        "Created": "2021-11-17T22:29:49.540695246Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (144.109178ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:04.183549    4246 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/CertSync (1.45s)

TestFunctional/parallel/NodeLabels (0.29s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:213: (dbg) Run:  kubectl --context functional-20211117142648-2140 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:213: (dbg) Non-zero exit: kubectl --context functional-20211117142648-2140 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (38.197866ms)

** stderr ** 
	W1117 14:30:02.011073    4186 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117142648-2140

** /stderr **
functional_test.go:215: failed to 'kubectl get nodes' with args "kubectl --context functional-20211117142648-2140 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:221: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	W1117 14:30:02.011073    4186 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117142648-2140

** /stderr **
functional_test.go:221: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	W1117 14:30:02.011073    4186 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117142648-2140

** /stderr **
functional_test.go:221: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	W1117 14:30:02.011073    4186 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117142648-2140

** /stderr **
functional_test.go:221: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	W1117 14:30:02.011073    4186 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117142648-2140

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20211117142648-2140
helpers_test.go:235: (dbg) docker inspect functional-20211117142648-2140:

-- stdout --
	[
	    {
	        "Name": "functional-20211117142648-2140",
	        "Id": "0a4db5d986095fce21025ba51dff65dd2f051020fce3b8f38faf4a77983ca7a2",
	        "Created": "2021-11-17T22:29:49.540695246Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211117142648-2140 -n functional-20211117142648-2140: exit status 7 (146.056511ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:30:02.266659    4191 status.go:247] status error: host: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-20211117142648-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestFunctional/parallel/NodeLabels (0.29s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1805: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1805: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo systemctl is-active crio": exit status 80 (265.51757ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_6b7239aee4f25975002bb6e89d3a731164a5501d_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1808: output of 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_6b7239aee4f25975002bb6e89d3a731164a5501d_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **: exit status 80
functional_test.go:1811: For runtime "docker": expected "crio" to be inactive but got "\n\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

TestFunctional/parallel/Version/components (0.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2051: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 version -o=json --components
functional_test.go:2051: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 version -o=json --components: exit status 80 (204.201265ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_version_4aca586f1e1becae668b759539b2a1d01ad61d4e_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2053: error version: exit status 80
functional_test.go:2058: expected to see "buildctl" in the minikube version --components but got:
functional_test.go:2058: expected to see "commit" in the minikube version --components but got:
functional_test.go:2058: expected to see "containerd" in the minikube version --components but got:
functional_test.go:2058: expected to see "crictl" in the minikube version --components but got:
functional_test.go:2058: expected to see "crio" in the minikube version --components but got:
functional_test.go:2058: expected to see "ctr" in the minikube version --components but got:
functional_test.go:2058: expected to see "docker" in the minikube version --components but got:
functional_test.go:2058: expected to see "minikubeVersion" in the minikube version --components but got:
functional_test.go:2058: expected to see "podman" in the minikube version --components but got:
functional_test.go:2058: expected to see "run" in the minikube version --components but got:
functional_test.go:2058: expected to see "crun" in the minikube version --components but got:
--- FAIL: TestFunctional/parallel/Version/components (0.20s)

TestFunctional/parallel/ImageCommands/ImageList (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageList
=== PAUSE TestFunctional/parallel/ImageCommands/ImageList

=== CONT  TestFunctional/parallel/ImageCommands/ImageList
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 image ls
functional_test.go:255: expected k8s.gcr.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageList (0.17s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh pgrep buildkitd
functional_test.go:264: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh pgrep buildkitd: exit status 80 (199.229502ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_90b035341dad3264896227ccd5ca14ead8f761a2_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:271: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 image build -t localhost/my-image:functional-20211117142648-2140 testdata/build
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 image ls
functional_test.go:384: expected "localhost/my-image:functional-20211117142648-2140" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.55s)

TestFunctional/parallel/DockerEnv/bash (0.21s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:440: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211117142648-2140 docker-env) && out/minikube-darwin-amd64 status -p functional-20211117142648-2140"
functional_test.go:440: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211117142648-2140 docker-env) && out/minikube-darwin-amd64 status -p functional-20211117142648-2140": exit status 1 (205.744436ms)

** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                               │
	│    * If the above advice does not help, please let us know:                                                                   │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                 │
	│                                                                                                                               │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                      │
	│    * Please also attach the following file to the GitHub issue:                                                               │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_docker-env_0286061359b7d88e1c575f824495f60db2866fdd_0.log    │
	│                                                                                                                               │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:446: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1897: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 update-context --alsologtostderr -v=2
functional_test.go:1897: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 update-context --alsologtostderr -v=2: exit status 80 (200.873846ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 14:30:39.670523    4612 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:30:39.670784    4612 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:30:39.670790    4612 out.go:310] Setting ErrFile to fd 2...
	I1117 14:30:39.670794    4612 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:30:39.670868    4612 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:30:39.671039    4612 mustload.go:65] Loading cluster: functional-20211117142648-2140
	I1117 14:30:39.671266    4612 config.go:176] Loaded profile config "functional-20211117142648-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:30:39.671600    4612 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:30:39.778744    4612 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:30:39.806062    4612 out.go:176] 
	W1117 14:30:39.806260    4612 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	W1117 14:30:39.806278    4612 out.go:241] * 
	* 
	W1117 14:30:39.809403    4612 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:30:39.830534    4612 out.go:176] 

** /stderr **
functional_test.go:1899: failed to run minikube update-context: args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:1904: update-context: got="\n\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1897: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 update-context --alsologtostderr -v=2
functional_test.go:1897: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 update-context --alsologtostderr -v=2: exit status 80 (450.903241ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 14:30:40.080818    4622 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:30:40.081002    4622 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:30:40.081008    4622 out.go:310] Setting ErrFile to fd 2...
	I1117 14:30:40.081012    4622 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:30:40.081085    4622 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:30:40.081249    4622 mustload.go:65] Loading cluster: functional-20211117142648-2140
	I1117 14:30:40.081471    4622 config.go:176] Loaded profile config "functional-20211117142648-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:30:40.081811    4622 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:30:40.439250    4622 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:30:40.466653    4622 out.go:176] 
	W1117 14:30:40.466859    4622 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	W1117 14:30:40.466875    4622 out.go:241] * 
	* 
	W1117 14:30:40.469916    4622 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:30:40.491286    4622 out.go:176] 

** /stderr **
functional_test.go:1899: failed to run minikube update-context: args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:1904: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1897: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 update-context --alsologtostderr -v=2
functional_test.go:1897: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 update-context --alsologtostderr -v=2: exit status 80 (207.870075ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 14:30:39.871573    4617 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:30:39.871701    4617 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:30:39.871707    4617 out.go:310] Setting ErrFile to fd 2...
	I1117 14:30:39.871711    4617 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:30:39.871785    4617 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:30:39.871965    4617 mustload.go:65] Loading cluster: functional-20211117142648-2140
	I1117 14:30:39.872184    4617 config.go:176] Loaded profile config "functional-20211117142648-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:30:39.872517    4617 cli_runner.go:115] Run: docker container inspect functional-20211117142648-2140 --format={{.State.Status}}
	W1117 14:30:39.986327    4617 cli_runner.go:162] docker container inspect functional-20211117142648-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:30:40.014774    4617 out.go:176] 
	W1117 14:30:40.014984    4617 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	W1117 14:30:40.014999    4617 out.go:241] * 
	* 
	W1117 14:30:40.018041    4617 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                   │
	│    * If the above advice does not help, please let us know:                                                                       │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                     │
	│                                                                                                                                   │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                          │
	│    * Please also attach the following file to the GitHub issue:                                                                   │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_update-context_e1a5dc549368a2315b01cd8c4caf3fe96e7daf2c_0.log    │
	│                                                                                                                                   │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:30:40.039364    4617 out.go:176] 

** /stderr **
functional_test.go:1899: failed to run minikube update-context: args "out/minikube-darwin-amd64 -p functional-20211117142648-2140 update-context --alsologtostderr -v=2": exit status 80
functional_test.go:1904: update-context: got="\n\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211117142648-2140

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117142648-2140 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211117142648-2140: (2.164870784s)
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 image ls
functional_test.go:384: expected "gcr.io/google-containers/addon-resizer:functional-20211117142648-2140" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:143: failed to get Kubernetes client for "functional-20211117142648-2140": client config: context "functional-20211117142648-2140" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (74.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect

=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:223: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:225: (dbg) Run:  kubectl --context functional-20211117142648-2140 get svc nginx-svc
functional_test_tunnel_test.go:225: (dbg) Non-zero exit: kubectl --context functional-20211117142648-2140 get svc nginx-svc: exit status 1 (38.267771ms)

** stderr ** 
	W1117 14:31:21.614819    4679 loader.go:223] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	Error in configuration: context was not found for specified context: functional-20211117142648-2140

** /stderr **
functional_test_tunnel_test.go:227: kubectl --context functional-20211117142648-2140 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:229: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:236: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (74.56s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 image save gcr.io/google-containers/addon-resizer:functional-20211117142648-2140 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:327: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 image ls
functional_test.go:384: expected "gcr.io/google-containers/addon-resizer:functional-20211117142648-2140" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.35s)

TestIngressAddonLegacy/StartLegacyK8sCluster (52.58s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20211117143126-2140 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
ingress_addon_legacy_test.go:40: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20211117143126-2140 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 80 (52.574552182s)

-- stdout --
	* [ingress-addon-legacy-20211117143126-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node ingress-addon-legacy-20211117143126-2140 in cluster ingress-addon-legacy-20211117143126-2140
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* docker "ingress-addon-legacy-20211117143126-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 14:31:26.352862    4735 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:31:26.353052    4735 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:31:26.353058    4735 out.go:310] Setting ErrFile to fd 2...
	I1117 14:31:26.353062    4735 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:31:26.353128    4735 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:31:26.353434    4735 out.go:304] Setting JSON to false
	I1117 14:31:26.377805    4735 start.go:112] hostinfo: {"hostname":"37310.local","uptime":1861,"bootTime":1637186425,"procs":347,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:31:26.377906    4735 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:31:26.404871    4735 out.go:176] * [ingress-addon-legacy-20211117143126-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:31:26.405014    4735 notify.go:174] Checking for updates...
	I1117 14:31:26.452629    4735 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:31:26.478807    4735 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:31:26.504767    4735 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:31:26.530333    4735 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:31:26.530515    4735 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:31:26.623120    4735 docker.go:132] docker version: linux-20.10.6
	I1117 14:31:26.623239    4735 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:31:26.796878    4735 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:31:26.741529731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:31:26.845651    4735 out.go:176] * Using the docker driver based on user configuration
	I1117 14:31:26.845714    4735 start.go:280] selected driver: docker
	I1117 14:31:26.845743    4735 start.go:775] validating driver "docker" against <nil>
	I1117 14:31:26.845766    4735 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:31:26.849097    4735 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:31:27.034457    4735 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:31:26.98025779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:31:27.034600    4735 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 14:31:27.034772    4735 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 14:31:27.034796    4735 cni.go:93] Creating CNI manager for ""
	I1117 14:31:27.034813    4735 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:31:27.034819    4735 start_flags.go:282] config:
	{Name:ingress-addon-legacy-20211117143126-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20211117143126-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:31:27.083787    4735 out.go:176] * Starting control plane node ingress-addon-legacy-20211117143126-2140 in cluster ingress-addon-legacy-20211117143126-2140
	I1117 14:31:27.083852    4735 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:31:27.110373    4735 out.go:176] * Pulling base image ...
	I1117 14:31:27.110446    4735 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 14:31:27.110528    4735 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:31:27.183002    4735 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1117 14:31:27.183034    4735 cache.go:57] Caching tarball of preloaded images
	I1117 14:31:27.183275    4735 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 14:31:27.209691    4735 out.go:176] * Downloading Kubernetes v1.18.20 preload ...
	I1117 14:31:27.209732    4735 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1117 14:31:27.272234    4735 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:31:27.272251    4735 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:31:27.308558    4735 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:de306a65f7d728d77c3b068e74796a19 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1117 14:31:33.407053    4735 preload.go:248] saving checksum for preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1117 14:31:33.407186    4735 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1117 14:31:34.211909    4735 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
	I1117 14:31:34.212327    4735 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/ingress-addon-legacy-20211117143126-2140/config.json ...
	I1117 14:31:34.212399    4735 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/ingress-addon-legacy-20211117143126-2140/config.json: {Name:mk2948de8026f026018700d56a3d337c54687616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 14:31:34.217339    4735 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:31:34.217395    4735 start.go:313] acquiring machines lock for ingress-addon-legacy-20211117143126-2140: {Name:mkfe0fecfa893394a9257aec1e9b2fce98ac5296 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:31:34.217637    4735 start.go:317] acquired machines lock for "ingress-addon-legacy-20211117143126-2140" in 223.042µs
	I1117 14:31:34.217689    4735 start.go:89] Provisioning new machine with config: &{Name:ingress-addon-legacy-20211117143126-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20211117143126-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ControlPlane:true Worker:true}
	I1117 14:31:34.217785    4735 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:31:34.255798    4735 out.go:203] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1117 14:31:34.256080    4735 start.go:160] libmachine.API.Create for "ingress-addon-legacy-20211117143126-2140" (driver="docker")
	I1117 14:31:34.256152    4735 client.go:168] LocalClient.Create starting
	I1117 14:31:34.256347    4735 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:31:34.256444    4735 main.go:130] libmachine: Decoding PEM data...
	I1117 14:31:34.256479    4735 main.go:130] libmachine: Parsing certificate...
	I1117 14:31:34.256585    4735 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:31:34.256647    4735 main.go:130] libmachine: Decoding PEM data...
	I1117 14:31:34.256664    4735 main.go:130] libmachine: Parsing certificate...
	I1117 14:31:34.285381    4735 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117143126-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:31:34.445072    4735 cli_runner.go:162] docker network inspect ingress-addon-legacy-20211117143126-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:31:34.445186    4735 network_create.go:254] running [docker network inspect ingress-addon-legacy-20211117143126-2140] to gather additional debugging logs...
	I1117 14:31:34.445206    4735 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117143126-2140
	W1117 14:31:34.600918    4735 cli_runner.go:162] docker network inspect ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:31:34.600948    4735 network_create.go:257] error running [docker network inspect ingress-addon-legacy-20211117143126-2140]: docker network inspect ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:34.600967    4735 network_create.go:259] output of [docker network inspect ingress-addon-legacy-20211117143126-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20211117143126-2140
	
	** /stderr **
	I1117 14:31:34.601075    4735 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:31:34.787101    4735 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0004b0ef0] misses:0}
	I1117 14:31:34.787135    4735 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:31:34.787150    4735 network_create.go:106] attempt to create docker network ingress-addon-legacy-20211117143126-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:31:34.787240    4735 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20211117143126-2140
	I1117 14:31:38.699405    4735 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20211117143126-2140: (3.912075213s)
	I1117 14:31:38.699437    4735 network_create.go:90] docker network ingress-addon-legacy-20211117143126-2140 192.168.49.0/24 created
	I1117 14:31:38.699489    4735 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20211117143126-2140" container
	I1117 14:31:38.699622    4735 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:31:38.807375    4735 cli_runner.go:115] Run: docker volume create ingress-addon-legacy-20211117143126-2140 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117143126-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:31:38.918008    4735 oci.go:102] Successfully created a docker volume ingress-addon-legacy-20211117143126-2140
	I1117 14:31:38.918138    4735 cli_runner.go:115] Run: docker run --rm --name ingress-addon-legacy-20211117143126-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117143126-2140 --entrypoint /usr/bin/test -v ingress-addon-legacy-20211117143126-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:31:39.417284    4735 oci.go:106] Successfully prepared a docker volume ingress-addon-legacy-20211117143126-2140
	E1117 14:31:39.417338    4735 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:31:39.417348    4735 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 14:31:39.417359    4735 client.go:171] LocalClient.Create took 5.161154959s
	I1117 14:31:39.417374    4735 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:31:39.417486    4735 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117143126-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:31:41.420972    4735 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:31:41.421066    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:31:41.556689    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:31:41.556773    4735 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:41.838423    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:31:41.969767    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:31:41.969882    4735 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:42.510405    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:31:42.633840    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:31:42.633969    4735 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:43.289946    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:31:43.410926    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	W1117 14:31:43.411015    4735 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	
	W1117 14:31:43.411038    4735 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:43.411060    4735 start.go:129] duration metric: createHost completed in 9.193180616s
	I1117 14:31:43.411072    4735 start.go:80] releasing machines lock for "ingress-addon-legacy-20211117143126-2140", held for 9.193332594s
	W1117 14:31:43.411091    4735 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:31:43.411603    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:43.540592    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:31:43.540647    4735 delete.go:82] Unable to get host status for ingress-addon-legacy-20211117143126-2140, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	W1117 14:31:43.540781    4735 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:31:43.540793    4735 start.go:547] Will try again in 5 seconds ...
	I1117 14:31:45.025455    4735 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117143126-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.607869749s)
	I1117 14:31:45.025471    4735 kic.go:188] duration metric: took 5.608034 seconds to extract preloaded images to volume
	I1117 14:31:48.551130    4735 start.go:313] acquiring machines lock for ingress-addon-legacy-20211117143126-2140: {Name:mkfe0fecfa893394a9257aec1e9b2fce98ac5296 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:31:48.551298    4735 start.go:317] acquired machines lock for "ingress-addon-legacy-20211117143126-2140" in 129.858µs
	I1117 14:31:48.551338    4735 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:31:48.551350    4735 fix.go:55] fixHost starting: 
	I1117 14:31:48.551847    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:48.663634    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:31:48.663686    4735 fix.go:108] recreateIfNeeded on ingress-addon-legacy-20211117143126-2140: state= err=unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:48.663699    4735 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:31:48.690907    4735 out.go:176] * docker "ingress-addon-legacy-20211117143126-2140" container is missing, will recreate.
	I1117 14:31:48.690952    4735 delete.go:124] DEMOLISHING ingress-addon-legacy-20211117143126-2140 ...
	I1117 14:31:48.691172    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:48.802881    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:31:48.802946    4735 stop.go:75] unable to get state: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:48.802963    4735 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:48.803367    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:48.910256    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:31:48.910300    4735 delete.go:82] Unable to get host status for ingress-addon-legacy-20211117143126-2140, assuming it has already been deleted: state: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:48.910390    4735 cli_runner.go:115] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20211117143126-2140
	W1117 14:31:49.017174    4735 cli_runner.go:162] docker container inspect -f {{.Id}} ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:31:49.017204    4735 kic.go:360] could not find the container ingress-addon-legacy-20211117143126-2140 to remove it. will try anyways
	I1117 14:31:49.017287    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:49.122650    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:31:49.122693    4735 oci.go:83] error getting container status, will try to delete anyways: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:49.122781    4735 cli_runner.go:115] Run: docker exec --privileged -t ingress-addon-legacy-20211117143126-2140 /bin/bash -c "sudo init 0"
	W1117 14:31:49.229336    4735 cli_runner.go:162] docker exec --privileged -t ingress-addon-legacy-20211117143126-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:31:49.229359    4735 oci.go:658] error shutdown ingress-addon-legacy-20211117143126-2140: docker exec --privileged -t ingress-addon-legacy-20211117143126-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:50.239759    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:50.353087    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:31:50.353127    4735 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:50.353137    4735 oci.go:672] temporary error: container ingress-addon-legacy-20211117143126-2140 status is  but expect it to be exited
	I1117 14:31:50.353157    4735 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:50.816080    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:50.928140    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:31:50.928187    4735 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:50.928197    4735 oci.go:672] temporary error: container ingress-addon-legacy-20211117143126-2140 status is  but expect it to be exited
	I1117 14:31:50.928224    4735 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:51.822580    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:51.936200    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:31:51.936241    4735 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:51.936249    4735 oci.go:672] temporary error: container ingress-addon-legacy-20211117143126-2140 status is  but expect it to be exited
	I1117 14:31:51.936272    4735 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:52.581580    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:52.695149    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:31:52.695190    4735 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:52.695203    4735 oci.go:672] temporary error: container ingress-addon-legacy-20211117143126-2140 status is  but expect it to be exited
	I1117 14:31:52.695225    4735 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:53.809221    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:53.919630    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:31:53.919669    4735 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:53.919677    4735 oci.go:672] temporary error: container ingress-addon-legacy-20211117143126-2140 status is  but expect it to be exited
	I1117 14:31:53.919698    4735 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:55.435967    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:55.545577    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:31:55.545617    4735 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:55.545626    4735 oci.go:672] temporary error: container ingress-addon-legacy-20211117143126-2140 status is  but expect it to be exited
	I1117 14:31:55.545648    4735 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:58.597045    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:31:58.707599    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:31:58.707646    4735 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:31:58.707655    4735 oci.go:672] temporary error: container ingress-addon-legacy-20211117143126-2140 status is  but expect it to be exited
	I1117 14:31:58.707678    4735 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:32:04.490423    4735 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:32:04.602213    4735 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:32:04.602261    4735 oci.go:670] temporary error verifying shutdown: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:32:04.602268    4735 oci.go:672] temporary error: container ingress-addon-legacy-20211117143126-2140 status is  but expect it to be exited
	I1117 14:32:04.602299    4735 oci.go:87] couldn't shut down ingress-addon-legacy-20211117143126-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	 
	I1117 14:32:04.602414    4735 cli_runner.go:115] Run: docker rm -f -v ingress-addon-legacy-20211117143126-2140
	I1117 14:32:04.710767    4735 cli_runner.go:115] Run: docker container inspect -f {{.Id}} ingress-addon-legacy-20211117143126-2140
	W1117 14:32:04.817063    4735 cli_runner.go:162] docker container inspect -f {{.Id}} ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:32:04.817191    4735 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117143126-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:32:04.925966    4735 cli_runner.go:115] Run: docker network rm ingress-addon-legacy-20211117143126-2140
	I1117 14:32:07.725013    4735 cli_runner.go:168] Completed: docker network rm ingress-addon-legacy-20211117143126-2140: (2.798952063s)
	W1117 14:32:07.725286    4735 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:32:07.725293    4735 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:32:08.728869    4735 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:32:08.756454    4735 out.go:203] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1117 14:32:08.756555    4735 start.go:160] libmachine.API.Create for "ingress-addon-legacy-20211117143126-2140" (driver="docker")
	I1117 14:32:08.756584    4735 client.go:168] LocalClient.Create starting
	I1117 14:32:08.756694    4735 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:32:08.756736    4735 main.go:130] libmachine: Decoding PEM data...
	I1117 14:32:08.756756    4735 main.go:130] libmachine: Parsing certificate...
	I1117 14:32:08.756809    4735 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:32:08.756836    4735 main.go:130] libmachine: Decoding PEM data...
	I1117 14:32:08.756845    4735 main.go:130] libmachine: Parsing certificate...
	I1117 14:32:08.757181    4735 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117143126-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:32:08.869040    4735 cli_runner.go:162] docker network inspect ingress-addon-legacy-20211117143126-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:32:08.869142    4735 network_create.go:254] running [docker network inspect ingress-addon-legacy-20211117143126-2140] to gather additional debugging logs...
	I1117 14:32:08.869155    4735 cli_runner.go:115] Run: docker network inspect ingress-addon-legacy-20211117143126-2140
	W1117 14:32:08.976134    4735 cli_runner.go:162] docker network inspect ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:32:08.976160    4735 network_create.go:257] error running [docker network inspect ingress-addon-legacy-20211117143126-2140]: docker network inspect ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20211117143126-2140
	I1117 14:32:08.976180    4735 network_create.go:259] output of [docker network inspect ingress-addon-legacy-20211117143126-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20211117143126-2140
	
	** /stderr **
	I1117 14:32:08.976278    4735 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:32:09.082452    4735 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004b0ef0] amended:false}} dirty:map[] misses:0}
	I1117 14:32:09.082485    4735 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:32:09.082666    4735 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004b0ef0] amended:true}} dirty:map[192.168.49.0:0xc0004b0ef0 192.168.58.0:0xc000116808] misses:0}
	I1117 14:32:09.082679    4735 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:32:09.082687    4735 network_create.go:106] attempt to create docker network ingress-addon-legacy-20211117143126-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:32:09.082770    4735 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20211117143126-2140
	I1117 14:32:12.938291    4735 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true ingress-addon-legacy-20211117143126-2140: (3.855410537s)
	I1117 14:32:12.938313    4735 network_create.go:90] docker network ingress-addon-legacy-20211117143126-2140 192.168.58.0/24 created
	I1117 14:32:12.938324    4735 kic.go:106] calculated static IP "192.168.58.2" for the "ingress-addon-legacy-20211117143126-2140" container
	I1117 14:32:12.938439    4735 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:32:13.043740    4735 cli_runner.go:115] Run: docker volume create ingress-addon-legacy-20211117143126-2140 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117143126-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:32:13.150732    4735 oci.go:102] Successfully created a docker volume ingress-addon-legacy-20211117143126-2140
	I1117 14:32:13.150849    4735 cli_runner.go:115] Run: docker run --rm --name ingress-addon-legacy-20211117143126-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20211117143126-2140 --entrypoint /usr/bin/test -v ingress-addon-legacy-20211117143126-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:32:13.580899    4735 oci.go:106] Successfully prepared a docker volume ingress-addon-legacy-20211117143126-2140
	E1117 14:32:13.580956    4735 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:32:13.580961    4735 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1117 14:32:13.580966    4735 client.go:171] LocalClient.Create took 4.824292424s
	I1117 14:32:13.580980    4735 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:32:13.581081    4735 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20211117143126-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:32:15.581249    4735 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:32:15.581361    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:32:15.716795    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:32:15.716942    4735 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:32:15.896086    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:32:16.014866    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:32:16.015007    4735 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:32:16.345819    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:32:16.481105    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:32:16.481185    4735 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:32:16.941904    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:32:17.062853    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	W1117 14:32:17.062983    4735 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	
	W1117 14:32:17.063012    4735 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:32:17.063033    4735 start.go:129] duration metric: createHost completed in 8.333931332s
	I1117 14:32:17.063129    4735 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:32:17.063239    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:32:17.187336    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:32:17.187411    4735 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:32:17.384867    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:32:17.504293    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:32:17.504392    4735 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:32:17.802036    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:32:17.933980    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	I1117 14:32:17.934058    4735 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:32:18.597613    4735 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140
	W1117 14:32:18.703416    4735 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140 returned with exit code 1
	W1117 14:32:18.703503    4735 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	
	W1117 14:32:18.703528    4735 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "ingress-addon-legacy-20211117143126-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20211117143126-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	I1117 14:32:18.703542    4735 fix.go:57] fixHost completed within 30.151687647s
	I1117 14:32:18.703555    4735 start.go:80] releasing machines lock for "ingress-addon-legacy-20211117143126-2140", held for 30.151739982s
	W1117 14:32:18.703693    4735 out.go:241] * Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20211117143126-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p ingress-addon-legacy-20211117143126-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:32:18.757797    4735 out.go:176] 
	W1117 14:32:18.757913    4735 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:32:18.757923    4735 out.go:241] * 
	* 
	W1117 14:32:18.758497    4735 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:32:18.872932    4735 out.go:176] 

** /stderr **
ingress_addon_legacy_test.go:42: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20211117143126-2140 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 80
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (52.58s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (0.63s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20211117143126-2140 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20211117143126-2140 addons enable ingress --alsologtostderr -v=5: exit status 10 (367.217846ms)

-- stdout --
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I1117 14:32:18.938347    4962 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:32:18.938549    4962 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:32:18.938555    4962 out.go:310] Setting ErrFile to fd 2...
	I1117 14:32:18.938558    4962 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:32:18.938621    4962 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:32:18.938990    4962 config.go:176] Loaded profile config "ingress-addon-legacy-20211117143126-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1117 14:32:18.939005    4962 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20211117143126-2140"
	I1117 14:32:18.939012    4962 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20211117143126-2140"
	I1117 14:32:18.939236    4962 host.go:66] Checking if "ingress-addon-legacy-20211117143126-2140" exists ...
	I1117 14:32:18.939726    4962 cli_runner.go:115] Run: docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}
	W1117 14:32:19.045180    4962 cli_runner.go:162] docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:32:19.045237    4962 host.go:54] host status for "ingress-addon-legacy-20211117143126-2140" returned error: state: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140
	W1117 14:32:19.045254    4962 addons.go:202] "ingress-addon-legacy-20211117143126-2140" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I1117 14:32:19.045271    4962 addons.go:386] Verifying addon ingress=true in "ingress-addon-legacy-20211117143126-2140"
	I1117 14:32:19.162067    4962 out.go:176] * Verifying ingress addon...
	W1117 14:32:19.162236    4962 loader.go:221] Config not found: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:32:19.216064    4962 out.go:176] 
	W1117 14:32:19.216316    4962 out.go:241] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20211117143126-2140" does not exist: client config: context "ingress-addon-legacy-20211117143126-2140" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20211117143126-2140" does not exist: client config: context "ingress-addon-legacy-20211117143126-2140" does not exist]
	W1117 14:32:19.216348    4962 out.go:241] * 
	* 
	W1117 14:32:19.219472    4962 out.go:241] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:32:19.262372    4962 out.go:176] 

** /stderr **
ingress_addon_legacy_test.go:72: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20211117143126-2140
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20211117143126-2140:

-- stdout --
	[
	    {
	        "Name": "ingress-addon-legacy-20211117143126-2140",
	        "Id": "6de84053cb431ec994f97c218abf22035dbaabf79cbcd1ab3ba3a7f1a1ffddd7",
	        "Created": "2021-11-17T22:32:09.186590484Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20211117143126-2140 -n ingress-addon-legacy-20211117143126-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20211117143126-2140 -n ingress-addon-legacy-20211117143126-2140: exit status 7 (149.408874ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:32:19.527513    4971 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20211117143126-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (0.63s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.26s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:157: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20211117143126-2140
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20211117143126-2140:

-- stdout --
	[
	    {
	        "Name": "ingress-addon-legacy-20211117143126-2140",
	        "Id": "6de84053cb431ec994f97c218abf22035dbaabf79cbcd1ab3ba3a7f1a1ffddd7",
	        "Created": "2021-11-17T22:32:09.186590484Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20211117143126-2140 -n ingress-addon-legacy-20211117143126-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20211117143126-2140 -n ingress-addon-legacy-20211117143126-2140: exit status 7 (149.654682ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:32:19.988682    4985 status.go:247] status error: host: state: unknown state "ingress-addon-legacy-20211117143126-2140": docker container inspect ingress-addon-legacy-20211117143126-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: ingress-addon-legacy-20211117143126-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20211117143126-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.26s)

TestJSONOutput/start/Command (45.19s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20211117143224-2140 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-20211117143224-2140 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : exit status 80 (45.186380274s)

-- stdout --
	{"specversion":"1.0","id":"7c092736-9ee9-4b92-8519-7e73f17bdba8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-20211117143224-2140] minikube v1.24.0 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"de78abaf-5125-4465-b8ca-b20f2aaca5b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"00e12b72-d723-48c5-b061-aca03b1876f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig"}}
	{"specversion":"1.0","id":"b9d546fd-9124-493d-8d6f-e9cb2e34047f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"2280caac-1be1-4881-96d1-e877f86ee1c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube"}}
	{"specversion":"1.0","id":"f4491785-6250-4ce7-a0d0-7c085eeb6dda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a3d802d-7cca-4f7c-8512-74b54c87ee09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-20211117143224-2140 in cluster json-output-20211117143224-2140","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"db2304ea-1247-4020-9633-1a13a8719694","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"40deb7f6-b5ce-465b-84cd-e949212a24d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9bf941de-aa90-4690-a976-43e6388fb611","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"}}
	{"specversion":"1.0","id":"7e303766-cf12-4c9f-b6b9-48dda837c412","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"docker \"json-output-20211117143224-2140\" container is missing, will recreate.","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"50a300e0-c56c-4774-87b7-7f3a27fa3caa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0f9afe0-2682-4d8e-85e5-9ccbe05cba25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start docker container. Running \"minikube delete -p json-output-20211117143224-2140\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"}}
	{"specversion":"1.0","id":"1b3b3e43-75f2-46c1-930c-16ae72f13136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules","name":"GUEST_PROVISION","url":""}}

-- /stdout --
** stderr ** 
	E1117 14:32:29.854492    5037 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	E1117 14:33:04.078396    5037 oci.go:197] error getting kernel modules path: Unable to locate kernel modules

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-amd64 start -p json-output-20211117143224-2140 --output=json --user=testUser --memory=2200 --wait=true --driver=docker ": exit status 80
--- FAIL: TestJSONOutput/start/Command (45.19s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 8 has already been assigned to another step:
Creating docker container (CPUs=2, Memory=2200MB) ...
Cannot use for:
docker "json-output-20211117143224-2140" container is missing, will recreate.
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7c092736-9ee9-4b92-8519-7e73f17bdba8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20211117143224-2140] minikube v1.24.0 on Darwin 11.2.3",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: de78abaf-5125-4465-b8ca-b20f2aaca5b5
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=12739"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 00e12b72-d723-48c5-b061-aca03b1876f2
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b9d546fd-9124-493d-8d6f-e9cb2e34047f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 2280caac-1be1-4881-96d1-e877f86ee1c1
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f4491785-6250-4ce7-a0d0-7c085eeb6dda
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3a3d802d-7cca-4f7c-8512-74b54c87ee09
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20211117143224-2140 in cluster json-output-20211117143224-2140",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: db2304ea-1247-4020-9633-1a13a8719694
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 40deb7f6-b5ce-465b-84cd-e949212a24d6
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 9bf941de-aa90-4690-a976-43e6388fb611
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7e303766-cf12-4c9f-b6b9-48dda837c412
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20211117143224-2140\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 50a300e0-c56c-4774-87b7-7f3a27fa3caa
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: f0f9afe0-2682-4d8e-85e5-9ccbe05cba25
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20211117143224-2140\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 1b3b3e43-75f2-46c1-930c-16ae72f13136
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules",
"name": "GUEST_PROVISION",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7c092736-9ee9-4b92-8519-7e73f17bdba8
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-20211117143224-2140] minikube v1.24.0 on Darwin 11.2.3",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: de78abaf-5125-4465-b8ca-b20f2aaca5b5
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=12739"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 00e12b72-d723-48c5-b061-aca03b1876f2
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b9d546fd-9124-493d-8d6f-e9cb2e34047f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 2280caac-1be1-4881-96d1-e877f86ee1c1
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f4491785-6250-4ce7-a0d0-7c085eeb6dda
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3a3d802d-7cca-4f7c-8512-74b54c87ee09
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting control plane node json-output-20211117143224-2140 in cluster json-output-20211117143224-2140",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: db2304ea-1247-4020-9633-1a13a8719694
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 40deb7f6-b5ce-465b-84cd-e949212a24d6
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 9bf941de-aa90-4690-a976-43e6388fb611
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7e303766-cf12-4c9f-b6b9-48dda837c412
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "docker \"json-output-20211117143224-2140\" container is missing, will recreate.",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 50a300e0-c56c-4774-87b7-7f3a27fa3caa
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=2200MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: f0f9afe0-2682-4d8e-85e5-9ccbe05cba25
datacontenttype: application/json
Data,
{
"message": "Failed to start docker container. Running \"minikube delete -p json-output-20211117143224-2140\" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 1b3b3e43-75f2-46c1-930c-16ae72f13136
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules",
"name": "GUEST_PROVISION",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.16s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20211117143224-2140 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p json-output-20211117143224-2140 --output=json --user=testUser: exit status 80 (162.477218ms)

-- stdout --
	{"specversion":"1.0","id":"bf8c7ef6-0930-4bad-9f71-772fd30a5417","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"state: unknown state \"json-output-20211117143224-2140\": docker container inspect json-output-20211117143224-2140 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117143224-2140","name":"GUEST_STATUS","url":""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-amd64 pause -p json-output-20211117143224-2140 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (0.16s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/unpause/Command (0.42s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20211117143224-2140 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-amd64 unpause -p json-output-20211117143224-2140 --output=json --user=testUser: exit status 80 (416.822116ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "json-output-20211117143224-2140": docker container inspect json-output-20211117143224-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: json-output-20211117143224-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-amd64 unpause -p json-output-20211117143224-2140 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (0.42s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/stop/Command (14.74s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20211117143224-2140 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p json-output-20211117143224-2140 --output=json --user=testUser: exit status 82 (14.736537811s)

-- stdout --
	{"specversion":"1.0","id":"3466de94-6346-426a-b0ac-7a14b21c7b5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117143224-2140\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"a33c2483-5b13-4e4d-a5be-25908b703183","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117143224-2140\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"88012594-4f36-4664-9035-0c94e59366ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117143224-2140\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"f2d1b3d9-6bc1-4a16-bd43-cde34e997515","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117143224-2140\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"a930e4b3-3a34-4b43-a2f4-cc0d3e660d5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117143224-2140\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"89bae7ab-736c-4674-9a24-59e28501e1c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Stopping node \"json-output-20211117143224-2140\"  ...","name":"Stopping","totalsteps":"2"}}
	{"specversion":"1.0","id":"0733e6fb-5cb2-414c-9c1d-c31ed7f2e267","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"82","issues":"","message":"docker container inspect json-output-20211117143224-2140 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117143224-2140","name":"GUEST_STOP_TIMEOUT","url":""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-amd64 stop -p json-output-20211117143224-2140 --output=json --user=testUser": exit status 82
--- FAIL: TestJSONOutput/stop/Command (14.74s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
json_output_test.go:80: audit.json does not contain the user testUser
--- FAIL: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
json_output_test.go:114: step 0 has already been assigned to another step:
Stopping node "json-output-20211117143224-2140"  ...
Cannot use for:
Stopping node "json-output-20211117143224-2140"  ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3466de94-6346-426a-b0ac-7a14b21c7b5c
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a33c2483-5b13-4e4d-a5be-25908b703183
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 88012594-4f36-4664-9035-0c94e59366ca
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f2d1b3d9-6bc1-4a16-bd43-cde34e997515
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a930e4b3-3a34-4b43-a2f4-cc0d3e660d5e
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 89bae7ab-736c-4674-9a24-59e28501e1c1
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 0733e6fb-5cb2-414c-9c1d-c31ed7f2e267
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20211117143224-2140 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117143224-2140",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
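For context on what this subtest asserts: each CloudEvent of type `io.k8s.sigs.minikube.step` carries a `currentstep` value, and the test requires every emitted value to be distinct. In the failing run above every event reported `"currentstep": "0"`. A minimal sketch of that check (the `stepEvent` struct and sample payloads are hypothetical, not the actual json_output_test.go code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stepEvent mirrors the "Data" payload of an io.k8s.sigs.minikube.step
// CloudEvent; field names follow the JSON shown in the log above.
type stepEvent struct {
	CurrentStep string `json:"currentstep"`
	Message     string `json:"message"`
	Name        string `json:"name"`
	TotalSteps  string `json:"totalsteps"`
}

// distinctCurrentSteps reports whether no two events share a currentstep
// value -- the property DistinctCurrentSteps asserts.
func distinctCurrentSteps(events []stepEvent) bool {
	seen := map[string]bool{}
	for _, e := range events {
		if seen[e.CurrentStep] {
			return false
		}
		seen[e.CurrentStep] = true
	}
	return true
}

func main() {
	// Sample payloads: both events report currentstep "0", as in the
	// failing run above, so the distinctness check fails.
	raw := []string{
		`{"currentstep":"0","message":"Stopping node ...","name":"Stopping","totalsteps":"2"}`,
		`{"currentstep":"0","message":"Stopping node ...","name":"Stopping","totalsteps":"2"}`,
	}
	var events []stepEvent
	for _, r := range raw {
		var e stepEvent
		if err := json.Unmarshal([]byte(r), &e); err != nil {
			panic(err)
		}
		events = append(events, e)
	}
	fmt.Println(distinctCurrentSteps(events)) // prints "false"
}
```

The sibling IncreasingCurrentSteps subtest consumes the same event stream, so a single malformed stop (here, the node was never stopped because the container did not exist) fails both assertions at once.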
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
json_output_test.go:133: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3466de94-6346-426a-b0ac-7a14b21c7b5c
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a33c2483-5b13-4e4d-a5be-25908b703183
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 88012594-4f36-4664-9035-0c94e59366ca
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: f2d1b3d9-6bc1-4a16-bd43-cde34e997515
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a930e4b3-3a34-4b43-a2f4-cc0d3e660d5e
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 89bae7ab-736c-4674-9a24-59e28501e1c1
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "Stopping node \"json-output-20211117143224-2140\"  ...",
"name": "Stopping",
"totalsteps": "2"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 0733e6fb-5cb2-414c-9c1d-c31ed7f2e267
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "82",
"issues": "",
"message": "docker container inspect json-output-20211117143224-2140 --format=: exit status 1\nstdout:\n\n\nstderr:\nError: No such container: json-output-20211117143224-2140",
"name": "GUEST_STOP_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestKicCustomNetwork/create_custom_network (95.32s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20211117143329-2140 --network=
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20211117143329-2140 --network=: (1m29.907162922s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:107: docker-network-20211117143329-2140 network is not listed by [[docker network ls --format {{.Name}}]]: 
-- stdout --
	bridge
	host
	none
-- /stdout --
helpers_test.go:175: Cleaning up "docker-network-20211117143329-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20211117143329-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20211117143329-2140: (5.296980037s)
--- FAIL: TestKicCustomNetwork/create_custom_network (95.32s)
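The assertion at kic_custom_network_test.go:107 boils down to an exact-match search of the newline-separated `docker network ls --format {{.Name}}` output; above, only the three default networks were listed. A sketch of that matching step (the `containsNetwork` helper is hypothetical, factored out here so it can be exercised without a Docker daemon):

```go
package main

import (
	"fmt"
	"strings"
)

// containsNetwork reports whether the newline-separated output of
// `docker network ls --format {{.Name}}` lists the given network name.
func containsNetwork(lsOutput, name string) bool {
	for _, n := range strings.Split(strings.TrimSpace(lsOutput), "\n") {
		if n == name {
			return true
		}
	}
	return false
}

func main() {
	// The failing run above listed only the three default networks.
	out := "bridge\nhost\nnone\n"
	fmt.Println(containsNetwork(out, "docker-network-20211117143329-2140")) // prints "false"
	fmt.Println(containsNetwork(out, "bridge"))                             // prints "true"
}
```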
TestMountStart/serial/StartWithMountFirst (46.1s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20211117143749-2140 --memory=2048 --mount --driver=docker 
mount_start_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-20211117143749-2140 --memory=2048 --mount --driver=docker : exit status 80 (45.657033555s)
-- stdout --
	* [mount-start-1-20211117143749-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node mount-start-1-20211117143749-2140 in cluster mount-start-1-20211117143749-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-1-20211117143749-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	E1117 14:37:55.289735    6200 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:38:29.640409    6200 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p mount-start-1-20211117143749-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:79: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-20211117143749-2140 --memory=2048 --mount --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/StartWithMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-20211117143749-2140
helpers_test.go:235: (dbg) docker inspect mount-start-1-20211117143749-2140:
-- stdout --
	[
	    {
	        "Name": "mount-start-1-20211117143749-2140",
	        "Id": "3127887e735a1c2657689380cb4610438e87682686a6366bf99032d5cd54a18d",
	        "Created": "2021-11-17T22:38:25.151397729Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-20211117143749-2140 -n mount-start-1-20211117143749-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-20211117143749-2140 -n mount-start-1-20211117143749-2140: exit status 7 (161.851169ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 14:38:35.516050    6441 status.go:247] status error: host: state: unknown state "mount-start-1-20211117143749-2140": docker container inspect mount-start-1-20211117143749-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20211117143749-2140
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-20211117143749-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/StartWithMountFirst (46.10s)
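The post-mortem pattern above (and repeated in the failures below) maps a failed `docker container inspect --format {{.State.Status}}` to the `Nonexistent` host state, which `minikube status` reports with exit status 7. A simplified sketch of that classification, not minikube's actual status.go:

```go
package main

import (
	"fmt"
	"strings"
)

// stateFromInspect is a simplified sketch of turning the result of
// `docker container inspect --format {{.State.Status}}` into the state
// string printed by `minikube status`.
func stateFromInspect(stdout, stderr string, exitCode int) string {
	if exitCode != 0 {
		// `Error: No such container: <name>` means the container was
		// never created (or was deleted), as in the logs above.
		if strings.Contains(stderr, "No such container") {
			return "Nonexistent"
		}
		return "Error"
	}
	return strings.TrimSpace(stdout) // e.g. "running", "exited"
}

func main() {
	fmt.Println(stateFromInspect("", "Error: No such container: mount-start-1", 1)) // prints "Nonexistent"
	fmt.Println(stateFromInspect("running\n", "", 0))                               // prints "running"
}
```

This is why the helper logs "status error: exit status 7 (may be ok)": a nonexistent host is an expected outcome after a failed provision, so log retrieval is skipped rather than treated as a second failure.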
TestMountStart/serial/StartWithMountSecond (46.6s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20211117143749-2140 --memory=2048 --mount --driver=docker 
mount_start_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-2-20211117143749-2140 --memory=2048 --mount --driver=docker : exit status 80 (45.920197894s)
-- stdout --
	* [mount-start-2-20211117143749-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node mount-start-2-20211117143749-2140 in cluster mount-start-2-20211117143749-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-2-20211117143749-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	E1117 14:38:41.700454    6446 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:39:16.098092    6446 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p mount-start-2-20211117143749-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:79: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-2-20211117143749-2140 --memory=2048 --mount --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/StartWithMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117143749-2140
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117143749-2140:
-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117143749-2140",
	        "Id": "29bc19889e75972ba59ba908137b2a365ac477a707a395dc3644017241e49e93",
	        "Created": "2021-11-17T22:39:11.639299242Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140: exit status 7 (207.236179ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 14:39:22.114866    6682 status.go:247] status error: host: state: unknown state "mount-start-2-20211117143749-2140": docker container inspect mount-start-2-20211117143749-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117143749-2140
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117143749-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/StartWithMountSecond (46.60s)
TestMountStart/serial/VerifyMountFirst (0.53s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20211117143749-2140 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-1-20211117143749-2140 ssh ls /minikube-host: exit status 80 (261.009382ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-1-20211117143749-2140": docker container inspect mount-start-1-20211117143749-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20211117143749-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_1bcea4236c355dc0a83fd7bb6da859e41ac1c109_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-darwin-amd64 -p mount-start-1-20211117143749-2140 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-20211117143749-2140
helpers_test.go:235: (dbg) docker inspect mount-start-1-20211117143749-2140:
-- stdout --
	[
	    {
	        "Name": "mount-start-1-20211117143749-2140",
	        "Id": "3127887e735a1c2657689380cb4610438e87682686a6366bf99032d5cd54a18d",
	        "Created": "2021-11-17T22:38:25.151397729Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-20211117143749-2140 -n mount-start-1-20211117143749-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-20211117143749-2140 -n mount-start-1-20211117143749-2140: exit status 7 (154.927275ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 14:39:22.644477    6696 status.go:247] status error: host: state: unknown state "mount-start-1-20211117143749-2140": docker container inspect mount-start-1-20211117143749-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-1-20211117143749-2140
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-20211117143749-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountFirst (0.53s)
TestMountStart/serial/VerifyMountSecond (0.47s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211117143749-2140 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-20211117143749-2140 ssh ls /minikube-host: exit status 80 (204.852965ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-2-20211117143749-2140": docker container inspect mount-start-2-20211117143749-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117143749-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_1bcea4236c355dc0a83fd7bb6da859e41ac1c109_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-20211117143749-2140 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117143749-2140
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117143749-2140:
-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117143749-2140",
	        "Id": "29bc19889e75972ba59ba908137b2a365ac477a707a395dc3644017241e49e93",
	        "Created": "2021-11-17T22:39:11.639299242Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140: exit status 7 (150.098247ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 14:39:23.113385    6710 status.go:247] status error: host: state: unknown state "mount-start-2-20211117143749-2140": docker container inspect mount-start-2-20211117143749-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117143749-2140
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117143749-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountSecond (0.47s)
TestMountStart/serial/VerifyMountPostDelete (0.48s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211117143749-2140 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-20211117143749-2140 ssh ls /minikube-host: exit status 80 (208.831778ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-2-20211117143749-2140": docker container inspect mount-start-2-20211117143749-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117143749-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_1bcea4236c355dc0a83fd7bb6da859e41ac1c109_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-20211117143749-2140 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostDelete]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117143749-2140
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117143749-2140:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T22:38:41Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "mount-start-2-20211117143749-2140"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/mount-start-2-20211117143749-2140/_data",
	        "Name": "mount-start-2-20211117143749-2140",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140: exit status 7 (155.116285ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:39:30.678881    6769 status.go:247] status error: host: state: unknown state "mount-start-2-20211117143749-2140": docker container inspect mount-start-2-20211117143749-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117143749-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117143749-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountPostDelete (0.48s)

TestMountStart/serial/Stop (15.05s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20211117143749-2140
mount_start_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p mount-start-2-20211117143749-2140: exit status 82 (14.786182749s)

-- stdout --
	* Stopping node "mount-start-2-20211117143749-2140"  ...
	* Stopping node "mount-start-2-20211117143749-2140"  ...
	* Stopping node "mount-start-2-20211117143749-2140"  ...
	* Stopping node "mount-start-2-20211117143749-2140"  ...
	* Stopping node "mount-start-2-20211117143749-2140"  ...
	* Stopping node "mount-start-2-20211117143749-2140"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect mount-start-2-20211117143749-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117143749-2140
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:101: stop failed: "out/minikube-darwin-amd64 stop -p mount-start-2-20211117143749-2140" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117143749-2140
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117143749-2140:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T22:38:41Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "mount-start-2-20211117143749-2140"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/mount-start-2-20211117143749-2140/_data",
	        "Name": "mount-start-2-20211117143749-2140",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140: exit status 7 (153.062772ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:39:45.733806    6811 status.go:247] status error: host: state: unknown state "mount-start-2-20211117143749-2140": docker container inspect mount-start-2-20211117143749-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117143749-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117143749-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/Stop (15.05s)

TestMountStart/serial/RestartStopped (67.29s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20211117143749-2140
mount_start_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-2-20211117143749-2140: exit status 80 (1m6.756605675s)

-- stdout --
	* [mount-start-2-20211117143749-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node mount-start-2-20211117143749-2140 in cluster mount-start-2-20211117143749-2140
	* Pulling base image ...
	* docker "mount-start-2-20211117143749-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "mount-start-2-20211117143749-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 14:40:10.188204    6816 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:40:46.643684    6816 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p mount-start-2-20211117143749-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:112: restart failed: "out/minikube-darwin-amd64 start -p mount-start-2-20211117143749-2140" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/RestartStopped]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117143749-2140
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117143749-2140:

-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117143749-2140",
	        "Id": "47c69f79557698ec488e39734bc46e427dd91e773d5ff5875d0fa692938c92f8",
	        "Created": "2021-11-17T22:40:42.205138837Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140: exit status 7 (162.861603ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:40:53.025610    7137 status.go:247] status error: host: state: unknown state "mount-start-2-20211117143749-2140": docker container inspect mount-start-2-20211117143749-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117143749-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117143749-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/RestartStopped (67.29s)

TestMountStart/serial/VerifyMountPostStop (0.55s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211117143749-2140 ssh ls /minikube-host
mount_start_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-20211117143749-2140 ssh ls /minikube-host: exit status 80 (283.648266ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "mount-start-2-20211117143749-2140": docker container inspect mount-start-2-20211117143749-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117143749-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_1bcea4236c355dc0a83fd7bb6da859e41ac1c109_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:90: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-20211117143749-2140 ssh ls /minikube-host" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-20211117143749-2140
helpers_test.go:235: (dbg) docker inspect mount-start-2-20211117143749-2140:

-- stdout --
	[
	    {
	        "Name": "mount-start-2-20211117143749-2140",
	        "Id": "47c69f79557698ec488e39734bc46e427dd91e773d5ff5875d0fa692938c92f8",
	        "Created": "2021-11-17T22:40:42.205138837Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-20211117143749-2140 -n mount-start-2-20211117143749-2140: exit status 7 (151.454607ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:40:53.577341    7151 status.go:247] status error: host: state: unknown state "mount-start-2-20211117143749-2140": docker container inspect mount-start-2-20211117143749-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: mount-start-2-20211117143749-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-2-20211117143749-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMountStart/serial/VerifyMountPostStop (0.55s)

TestMultiNode/serial/FreshStart2Nodes (46.43s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:82: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117144058-2140 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
multinode_test.go:82: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211117144058-2140 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 80 (45.819227889s)

-- stdout --
	* [multinode-20211117144058-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node multinode-20211117144058-2140 in cluster multinode-20211117144058-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20211117144058-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 14:40:58.374847    7216 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:40:58.374986    7216 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:40:58.374992    7216 out.go:310] Setting ErrFile to fd 2...
	I1117 14:40:58.374995    7216 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:40:58.375069    7216 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:40:58.375377    7216 out.go:304] Setting JSON to false
	I1117 14:40:58.400075    7216 start.go:112] hostinfo: {"hostname":"37310.local","uptime":2433,"bootTime":1637186425,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:40:58.400164    7216 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:40:58.426247    7216 out.go:176] * [multinode-20211117144058-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:40:58.426423    7216 notify.go:174] Checking for updates...
	I1117 14:40:58.473978    7216 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:40:58.499821    7216 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:40:58.525983    7216 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:40:58.551952    7216 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:40:58.552154    7216 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:40:58.649876    7216 docker.go:132] docker version: linux-20.10.6
	I1117 14:40:58.650001    7216 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:40:58.828993    7216 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:40:58.773165011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:40:58.877826    7216 out.go:176] * Using the docker driver based on user configuration
	I1117 14:40:58.877921    7216 start.go:280] selected driver: docker
	I1117 14:40:58.877929    7216 start.go:775] validating driver "docker" against <nil>
	I1117 14:40:58.877954    7216 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:40:58.881382    7216 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:40:59.061506    7216 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:40:59.004681913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:40:59.061584    7216 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 14:40:59.061708    7216 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 14:40:59.061725    7216 cni.go:93] Creating CNI manager for ""
	I1117 14:40:59.061731    7216 cni.go:154] 0 nodes found, recommending kindnet
	I1117 14:40:59.061740    7216 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1117 14:40:59.061746    7216 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1117 14:40:59.061750    7216 start_flags.go:277] Found "CNI" CNI - setting NetworkPlugin=cni
	I1117 14:40:59.061758    7216 start_flags.go:282] config:
	{Name:multinode-20211117144058-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117144058-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:40:59.089912    7216 out.go:176] * Starting control plane node multinode-20211117144058-2140 in cluster multinode-20211117144058-2140
	I1117 14:40:59.089967    7216 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:40:59.115483    7216 out.go:176] * Pulling base image ...
	I1117 14:40:59.115579    7216 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:40:59.115670    7216 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 14:40:59.115673    7216 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:40:59.115701    7216 cache.go:57] Caching tarball of preloaded images
	I1117 14:40:59.115914    7216 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 14:40:59.115941    7216 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 14:40:59.118268    7216 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/multinode-20211117144058-2140/config.json ...
	I1117 14:40:59.118334    7216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/multinode-20211117144058-2140/config.json: {Name:mke161de1f507d246115a5fd1421e1164196ef34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 14:40:59.233247    7216 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:40:59.233279    7216 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:40:59.233291    7216 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:40:59.233328    7216 start.go:313] acquiring machines lock for multinode-20211117144058-2140: {Name:mk8e725fd0df85c062d82279df9d95b56272d117 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:40:59.233466    7216 start.go:317] acquired machines lock for "multinode-20211117144058-2140" in 125.655µs
	I1117 14:40:59.233496    7216 start.go:89] Provisioning new machine with config: &{Name:multinode-20211117144058-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117144058-2140 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
	I1117 14:40:59.233548    7216 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:40:59.281384    7216 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 14:40:59.281690    7216 start.go:160] libmachine.API.Create for "multinode-20211117144058-2140" (driver="docker")
	I1117 14:40:59.281736    7216 client.go:168] LocalClient.Create starting
	I1117 14:40:59.281932    7216 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:40:59.282005    7216 main.go:130] libmachine: Decoding PEM data...
	I1117 14:40:59.282038    7216 main.go:130] libmachine: Parsing certificate...
	I1117 14:40:59.282150    7216 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:40:59.282206    7216 main.go:130] libmachine: Decoding PEM data...
	I1117 14:40:59.282223    7216 main.go:130] libmachine: Parsing certificate...
	I1117 14:40:59.283280    7216 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:40:59.396130    7216 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:40:59.396236    7216 network_create.go:254] running [docker network inspect multinode-20211117144058-2140] to gather additional debugging logs...
	I1117 14:40:59.396251    7216 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140
	W1117 14:40:59.509106    7216 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 returned with exit code 1
	I1117 14:40:59.509128    7216 network_create.go:257] error running [docker network inspect multinode-20211117144058-2140]: docker network inspect multinode-20211117144058-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117144058-2140
	I1117 14:40:59.509144    7216 network_create.go:259] output of [docker network inspect multinode-20211117144058-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117144058-2140
	
	** /stderr **
	I1117 14:40:59.509241    7216 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:40:59.623466    7216 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000e0f8] misses:0}
	I1117 14:40:59.623505    7216 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:40:59.623524    7216 network_create.go:106] attempt to create docker network multinode-20211117144058-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:40:59.623602    7216 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140
	I1117 14:41:03.550561    7216 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140: (3.926824267s)
	I1117 14:41:03.550586    7216 network_create.go:90] docker network multinode-20211117144058-2140 192.168.49.0/24 created
	I1117 14:41:03.550602    7216 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117144058-2140" container
	I1117 14:41:03.550720    7216 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:41:03.660338    7216 cli_runner.go:115] Run: docker volume create multinode-20211117144058-2140 --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:41:03.770264    7216 oci.go:102] Successfully created a docker volume multinode-20211117144058-2140
	I1117 14:41:03.770424    7216 cli_runner.go:115] Run: docker run --rm --name multinode-20211117144058-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --entrypoint /usr/bin/test -v multinode-20211117144058-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:41:04.268214    7216 oci.go:106] Successfully prepared a docker volume multinode-20211117144058-2140
	E1117 14:41:04.268278    7216 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:41:04.268289    7216 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:41:04.268306    7216 client.go:171] LocalClient.Create took 4.986466037s
	I1117 14:41:04.268355    7216 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:41:04.268457    7216 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117144058-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:41:06.275736    7216 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:41:06.275826    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:06.411840    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:06.411948    7216 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:06.688827    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:06.820540    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:06.820610    7216 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:07.361062    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:07.502538    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:07.502612    7216 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:08.158033    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:08.301992    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	W1117 14:41:08.302080    7216 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:41:08.302105    7216 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:08.302117    7216 start.go:129] duration metric: createHost completed in 9.068393111s
	I1117 14:41:08.302125    7216 start.go:80] releasing machines lock for "multinode-20211117144058-2140", held for 9.068478506s
	W1117 14:41:08.302142    7216 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:41:08.302817    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:08.450613    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:08.450676    7216 delete.go:82] Unable to get host status for multinode-20211117144058-2140, assuming it has already been deleted: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	W1117 14:41:08.450837    7216 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:41:08.450851    7216 start.go:547] Will try again in 5 seconds ...
	I1117 14:41:09.920619    7216 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117144058-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (5.652012627s)
	I1117 14:41:09.920646    7216 kic.go:188] duration metric: took 5.652185 seconds to extract preloaded images to volume
	I1117 14:41:13.453998    7216 start.go:313] acquiring machines lock for multinode-20211117144058-2140: {Name:mk8e725fd0df85c062d82279df9d95b56272d117 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:41:13.454166    7216 start.go:317] acquired machines lock for "multinode-20211117144058-2140" in 138.187µs
	I1117 14:41:13.454210    7216 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:41:13.454221    7216 fix.go:55] fixHost starting: 
	I1117 14:41:13.454701    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:13.571833    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:13.571873    7216 fix.go:108] recreateIfNeeded on multinode-20211117144058-2140: state= err=unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:13.571889    7216 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:41:13.619378    7216 out.go:176] * docker "multinode-20211117144058-2140" container is missing, will recreate.
	I1117 14:41:13.619419    7216 delete.go:124] DEMOLISHING multinode-20211117144058-2140 ...
	I1117 14:41:13.619602    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:13.732497    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:41:13.732540    7216 stop.go:75] unable to get state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:13.732552    7216 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:13.732936    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:13.843263    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:13.843306    7216 delete.go:82] Unable to get host status for multinode-20211117144058-2140, assuming it has already been deleted: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:13.843390    7216 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117144058-2140
	W1117 14:41:13.953460    7216 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:13.953487    7216 kic.go:360] could not find the container multinode-20211117144058-2140 to remove it. will try anyways
	I1117 14:41:13.953573    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:14.066075    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:41:14.066122    7216 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:14.066219    7216 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0"
	W1117 14:41:14.176076    7216 cli_runner.go:162] docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:41:14.176110    7216 oci.go:658] error shutdown multinode-20211117144058-2140: docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:15.176401    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:15.292141    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:15.292184    7216 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:15.292193    7216 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:41:15.292213    7216 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:15.764624    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:15.882717    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:15.882757    7216 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:15.882766    7216 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:41:15.882788    7216 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:16.775750    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:16.892429    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:16.892478    7216 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:16.892490    7216 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:41:16.892512    7216 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:17.533790    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:17.647232    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:17.647282    7216 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:17.647293    7216 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:41:17.647318    7216 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:18.759166    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:18.874502    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:18.874541    7216 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:18.874548    7216 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:41:18.874571    7216 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:20.393175    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:20.511001    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:20.511042    7216 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:20.511051    7216 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:41:20.511072    7216 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:23.562535    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:23.682555    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:23.682595    7216 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:23.682603    7216 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:41:23.682624    7216 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:29.475086    7216 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:29.592903    7216 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:29.592943    7216 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:29.592951    7216 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:41:29.592976    7216 oci.go:87] couldn't shut down multinode-20211117144058-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	 
	I1117 14:41:29.593053    7216 cli_runner.go:115] Run: docker rm -f -v multinode-20211117144058-2140
	I1117 14:41:29.705293    7216 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117144058-2140
	W1117 14:41:29.817289    7216 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:29.817398    7216 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:41:29.930976    7216 cli_runner.go:115] Run: docker network rm multinode-20211117144058-2140
	I1117 14:41:32.690013    7216 cli_runner.go:168] Completed: docker network rm multinode-20211117144058-2140: (2.75893799s)
	W1117 14:41:32.690274    7216 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:41:32.690281    7216 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:41:33.698145    7216 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:41:33.725680    7216 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 14:41:33.725859    7216 start.go:160] libmachine.API.Create for "multinode-20211117144058-2140" (driver="docker")
	I1117 14:41:33.725896    7216 client.go:168] LocalClient.Create starting
	I1117 14:41:33.726086    7216 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:41:33.726164    7216 main.go:130] libmachine: Decoding PEM data...
	I1117 14:41:33.726187    7216 main.go:130] libmachine: Parsing certificate...
	I1117 14:41:33.726297    7216 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:41:33.726351    7216 main.go:130] libmachine: Decoding PEM data...
	I1117 14:41:33.726368    7216 main.go:130] libmachine: Parsing certificate...
	I1117 14:41:33.727262    7216 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:41:33.841703    7216 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:41:33.841796    7216 network_create.go:254] running [docker network inspect multinode-20211117144058-2140] to gather additional debugging logs...
	I1117 14:41:33.841975    7216 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140
	W1117 14:41:33.954333    7216 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:33.954357    7216 network_create.go:257] error running [docker network inspect multinode-20211117144058-2140]: docker network inspect multinode-20211117144058-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117144058-2140
	I1117 14:41:33.954370    7216 network_create.go:259] output of [docker network inspect multinode-20211117144058-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117144058-2140
	
	** /stderr **
	I1117 14:41:33.954457    7216 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:41:34.065884    7216 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e0f8] amended:false}} dirty:map[] misses:0}
	I1117 14:41:34.065930    7216 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:41:34.066142    7216 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00000e0f8] amended:true}} dirty:map[192.168.49.0:0xc00000e0f8 192.168.58.0:0xc00012e590] misses:0}
	I1117 14:41:34.066156    7216 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:41:34.066163    7216 network_create.go:106] attempt to create docker network multinode-20211117144058-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:41:34.066247    7216 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140
	I1117 14:41:38.053302    7216 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140: (3.98693956s)
	I1117 14:41:38.053326    7216 network_create.go:90] docker network multinode-20211117144058-2140 192.168.58.0/24 created
	I1117 14:41:38.053337    7216 kic.go:106] calculated static IP "192.168.58.2" for the "multinode-20211117144058-2140" container
	I1117 14:41:38.053447    7216 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:41:38.167572    7216 cli_runner.go:115] Run: docker volume create multinode-20211117144058-2140 --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:41:38.280107    7216 oci.go:102] Successfully created a docker volume multinode-20211117144058-2140
	I1117 14:41:38.280249    7216 cli_runner.go:115] Run: docker run --rm --name multinode-20211117144058-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --entrypoint /usr/bin/test -v multinode-20211117144058-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:41:38.711535    7216 oci.go:106] Successfully prepared a docker volume multinode-20211117144058-2140
	E1117 14:41:38.711575    7216 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:41:38.711586    7216 client.go:171] LocalClient.Create took 4.985589468s
	I1117 14:41:38.711604    7216 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:41:38.711621    7216 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:41:38.711737    7216 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117144058-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:41:40.711899    7216 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:41:40.712028    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:40.863913    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:40.864009    7216 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:41.045720    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:41.173986    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:41.174115    7216 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:41.505183    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:41.627171    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:41.627259    7216 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:42.091700    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:42.216028    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	W1117 14:41:42.216133    7216 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:41:42.216163    7216 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:42.216176    7216 start.go:129] duration metric: createHost completed in 8.517843264s
	I1117 14:41:42.216251    7216 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:41:42.216337    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:42.344546    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:42.344659    7216 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:42.540787    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:42.674874    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:42.674985    7216 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:42.976223    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:43.110135    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:41:43.110225    7216 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:43.776460    7216 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:41:43.902047    7216 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	W1117 14:41:43.902132    7216 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:41:43.902148    7216 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:41:43.902167    7216 fix.go:57] fixHost completed within 30.44736472s
	I1117 14:41:43.902177    7216 start.go:80] releasing machines lock for "multinode-20211117144058-2140", held for 30.4474224s
	W1117 14:41:43.902331    7216 out.go:241] * Failed to start docker container. Running "minikube delete -p multinode-20211117144058-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117144058-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:41:44.021342    7216 out.go:176] 
	W1117 14:41:44.021526    7216 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:41:44.021569    7216 out.go:241] * 
	* 
	W1117 14:41:44.022875    7216 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:41:44.141268    7216 out.go:176] 

** /stderr **
multinode_test.go:84: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-20211117144058-2140 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 80
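The `network.go` lines in the log above show minikube skipping the reserved 192.168.49.0/24 block and settling on 192.168.58.0/24. A minimal sketch of that scan, assuming candidate /24 blocks are walked in steps of 9 (consistent with the 49 → 58 jump in the log); the function and reservation map here are hypothetical simplifications, not minikube's actual network package:

```go
package main

import "fmt"

// pickSubnet sketches the free-subnet scan seen in the log: walk candidate
// 192.168.X.0/24 blocks and return the first one with no unexpired
// reservation. The step of 9 matches the jump from 192.168.49.0 to
// 192.168.58.0 in the log; this helper is illustrative only.
func pickSubnet(reserved map[string]bool) string {
	for third := 49; third <= 254; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !reserved[cidr] {
			return cidr
		}
	}
	return "" // no free /24 found
}

func main() {
	// 192.168.49.0/24 holds an unexpired reservation, as in the log.
	reserved := map[string]bool{"192.168.49.0/24": true}
	fmt.Println(pickSubnet(reserved)) // 192.168.58.0/24
}
```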
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "8a0cd05a565e9f892d41e270256c255970cf7bcdcdee2b7195fe98b43d53f414",
	        "Created": "2021-11-17T22:41:34.192651591Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (164.744042ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:41:44.765884    7450 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (46.43s)

TestMultiNode/serial/DeployApp2Nodes (0.76s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:463: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:463: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (71.192496ms)

** stderr ** 
	error: cluster "multinode-20211117144058-2140" does not exist

** /stderr **
multinode_test.go:465: failed to create busybox deployment to multinode cluster
multinode_test.go:468: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- rollout status deployment/busybox
multinode_test.go:468: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- rollout status deployment/busybox: exit status 1 (70.177159ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117144058-2140"

** /stderr **
multinode_test.go:470: failed to deploy busybox to multinode cluster
multinode_test.go:474: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:474: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (69.960121ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117144058-2140"

** /stderr **
multinode_test.go:476: failed to retrieve Pod IPs
multinode_test.go:480: expected 2 Pod IPs but got 1
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (71.35923ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117144058-2140"

** /stderr **
multinode_test.go:488: failed get Pod names
multinode_test.go:494: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- exec  -- nslookup kubernetes.io
multinode_test.go:494: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- exec  -- nslookup kubernetes.io: exit status 1 (72.108606ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117144058-2140"

** /stderr **
multinode_test.go:496: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:504: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- exec  -- nslookup kubernetes.default
multinode_test.go:504: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- exec  -- nslookup kubernetes.default: exit status 1 (70.902317ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117144058-2140"

** /stderr **
multinode_test.go:506: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:512: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:512: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (70.03517ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117144058-2140"

** /stderr **
multinode_test.go:514: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "8a0cd05a565e9f892d41e270256c255970cf7bcdcdee2b7195fe98b43d53f414",
	        "Created": "2021-11-17T22:41:34.192651591Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (149.901695ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:41:45.526825    7473 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (0.76s)
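The post-mortem and status commands throughout this report pass Go `text/template` format strings to `docker inspect` and `minikube status` (e.g. `--format={{.State.Status}}`, `--format={{.Host}}`). A self-contained sketch of how such a format string is evaluated against an inspect-like value; the struct below mirrors just enough of docker's inspect JSON to be illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// inspectResult mirrors just enough of docker's inspect output to
// evaluate a --format template such as {{.State.Status}}.
// The struct is illustrative, not docker's real type.
type inspectResult struct {
	State struct {
		Status string
	}
}

// render parses a Go text/template format string and executes it
// against the given value — the mechanism behind --format flags.
func render(format string, v interface{}) (string, error) {
	tmpl, err := template.New("inspect").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, v); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	var r inspectResult
	r.State.Status = "running"
	out, _ := render("{{.State.Status}}", r)
	fmt.Println(out) // running
}
```

In this report the templates never get that far: `docker inspect` exits with status 1 before rendering, because the container does not exist.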

TestMultiNode/serial/PingHostFrom2Pods (0.34s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:522: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:522: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-20211117144058-2140 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (69.432193ms)

** stderr ** 
	error: no server found for cluster "multinode-20211117144058-2140"

** /stderr **
multinode_test.go:524: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "8a0cd05a565e9f892d41e270256c255970cf7bcdcdee2b7195fe98b43d53f414",
	        "Created": "2021-11-17T22:41:34.192651591Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (153.034337ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:41:45.864343    7484 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.34s)

TestMultiNode/serial/AddNode (0.48s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20211117144058-2140 -v 3 --alsologtostderr
multinode_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20211117144058-2140 -v 3 --alsologtostderr: exit status 80 (212.311892ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 14:41:45.904099    7489 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:41:45.904291    7489 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:41:45.904297    7489 out.go:310] Setting ErrFile to fd 2...
	I1117 14:41:45.904301    7489 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:41:45.904388    7489 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:41:45.904571    7489 mustload.go:65] Loading cluster: multinode-20211117144058-2140
	I1117 14:41:45.904789    7489 config.go:176] Loaded profile config "multinode-20211117144058-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:41:45.905145    7489 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:46.013124    7489 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:46.051010    7489 out.go:176] 
	W1117 14:41:46.051192    7489 out.go:241] X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:41:46.051208    7489 out.go:241] * 
	* 
	W1117 14:41:46.055133    7489 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:41:46.076103    7489 out.go:176] 

** /stderr **
multinode_test.go:109: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-20211117144058-2140 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "8a0cd05a565e9f892d41e270256c255970cf7bcdcdee2b7195fe98b43d53f414",
	        "Created": "2021-11-17T22:41:34.192651591Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (148.41215ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:41:46.344524    7498 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.48s)

TestMultiNode/serial/ProfileList (0.58s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:152: expected profile "multinode-20211117144058-2140" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-20211117144058-2140\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-20211117144058-2140\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFS
Share\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.22.3\",\"ClusterName\":\"multinode-20211117144058-2140\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"ExtraOptions\":[{\"Component\":\"kubelet\",\"Key\":\"cni-conf-dir\",\"Value\":\"/etc/cni/net.mk\"}],\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"Kubern
etesVersion\":\"v1.22.3\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\"}}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "8a0cd05a565e9f892d41e270256c255970cf7bcdcdee2b7195fe98b43d53f414",
	        "Created": "2021-11-17T22:41:34.192651591Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (153.282464ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:41:46.928247    7516 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.58s)

TestMultiNode/serial/CopyFile (0.44s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --output json --alsologtostderr
multinode_test.go:170: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --output json --alsologtostderr: exit status 7 (180.144001ms)

-- stdout --
	{"Name":"multinode-20211117144058-2140","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	I1117 14:41:46.967949    7521 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:41:46.971812    7521 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:41:46.971826    7521 out.go:310] Setting ErrFile to fd 2...
	I1117 14:41:46.971837    7521 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:41:46.972011    7521 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:41:46.972407    7521 out.go:304] Setting JSON to true
	I1117 14:41:46.972445    7521 mustload.go:65] Loading cluster: multinode-20211117144058-2140
	I1117 14:41:46.993895    7521 config.go:176] Loaded profile config "multinode-20211117144058-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:41:46.993929    7521 status.go:253] checking status of multinode-20211117144058-2140 ...
	I1117 14:41:46.994649    7521 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:47.108519    7521 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:47.108588    7521 status.go:328] multinode-20211117144058-2140 host status = "" (err=state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	)
	I1117 14:41:47.108602    7521 status.go:255] multinode-20211117144058-2140 status: &{Name:multinode-20211117144058-2140 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 14:41:47.108622    7521 status.go:258] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	E1117 14:41:47.108625    7521 status.go:261] The "multinode-20211117144058-2140" host does not exist!

** /stderr **
multinode_test.go:177: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "8a0cd05a565e9f892d41e270256c255970cf7bcdcdee2b7195fe98b43d53f414",
	        "Created": "2021-11-17T22:41:34.192651591Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (152.119535ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:41:47.372976    7530 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.44s)

TestMultiNode/serial/StopNode (0.67s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117144058-2140 node stop m03
multinode_test.go:192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117144058-2140 node stop m03: exit status 85 (93.053761ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:194: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 node stop m03": exit status 85
multinode_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status
multinode_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status: exit status 7 (153.76968ms)

-- stdout --
	multinode-20211117144058-2140
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 14:41:47.620135    7536 status.go:258] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	E1117 14:41:47.620141    7536 status.go:261] The "multinode-20211117144058-2140" host does not exist!

** /stderr **
multinode_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr
multinode_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr: exit status 7 (150.868119ms)

-- stdout --
	multinode-20211117144058-2140
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I1117 14:41:47.660119    7541 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:41:47.660291    7541 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:41:47.660296    7541 out.go:310] Setting ErrFile to fd 2...
	I1117 14:41:47.660303    7541 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:41:47.660378    7541 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:41:47.660537    7541 out.go:304] Setting JSON to false
	I1117 14:41:47.660551    7541 mustload.go:65] Loading cluster: multinode-20211117144058-2140
	I1117 14:41:47.660768    7541 config.go:176] Loaded profile config "multinode-20211117144058-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:41:47.660780    7541 status.go:253] checking status of multinode-20211117144058-2140 ...
	I1117 14:41:47.661111    7541 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:41:47.771004    7541 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:41:47.771072    7541 status.go:328] multinode-20211117144058-2140 host status = "" (err=state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	)
	I1117 14:41:47.771092    7541 status.go:255] multinode-20211117144058-2140 status: &{Name:multinode-20211117144058-2140 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 14:41:47.771120    7541 status.go:258] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	E1117 14:41:47.771123    7541 status.go:261] The "multinode-20211117144058-2140" host does not exist!

** /stderr **
multinode_test.go:211: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr": multinode-20211117144058-2140
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:215: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr": multinode-20211117144058-2140
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:219: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr": multinode-20211117144058-2140
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "8a0cd05a565e9f892d41e270256c255970cf7bcdcdee2b7195fe98b43d53f414",
	        "Created": "2021-11-17T22:41:34.192651591Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (150.896166ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:41:48.038534    7550 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.67s)

TestMultiNode/serial/StartAfterStop (0.61s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:226: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117144058-2140 node start m03 --alsologtostderr
multinode_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117144058-2140 node start m03 --alsologtostderr: exit status 85 (93.528234ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1117 14:41:48.172356    7558 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:41:48.172577    7558 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:41:48.172583    7558 out.go:310] Setting ErrFile to fd 2...
	I1117 14:41:48.172587    7558 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:41:48.172665    7558 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:41:48.172876    7558 mustload.go:65] Loading cluster: multinode-20211117144058-2140
	I1117 14:41:48.173091    7558 config.go:176] Loaded profile config "multinode-20211117144058-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:41:48.200017    7558 out.go:176] 
	W1117 14:41:48.200204    7558 out.go:241] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
	W1117 14:41:48.200220    7558 out.go:241] * 
	* 
	W1117 14:41:48.203531    7558 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:41:48.224809    7558 out.go:176] 

** /stderr **
multinode_test.go:238: I1117 14:41:48.172356    7558 out.go:297] Setting OutFile to fd 1 ...
I1117 14:41:48.172577    7558 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 14:41:48.172583    7558 out.go:310] Setting ErrFile to fd 2...
I1117 14:41:48.172587    7558 out.go:344] TERM=,COLORTERM=, which probably does not support color
I1117 14:41:48.172665    7558 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
I1117 14:41:48.172876    7558 mustload.go:65] Loading cluster: multinode-20211117144058-2140
I1117 14:41:48.173091    7558 config.go:176] Loaded profile config "multinode-20211117144058-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
I1117 14:41:48.200017    7558 out.go:176] 
W1117 14:41:48.200204    7558 out.go:241] X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: Could not find node m03
W1117 14:41:48.200220    7558 out.go:241] * 
* 
W1117 14:41:48.203531    7558 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1117 14:41:48.224809    7558 out.go:176] 
multinode_test.go:239: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 node start m03 --alsologtostderr": exit status 85
multinode_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status
multinode_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status: exit status 7 (151.652817ms)

-- stdout --
	multinode-20211117144058-2140
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 14:41:48.377353    7559 status.go:258] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	E1117 14:41:48.377361    7559 status.go:261] The "multinode-20211117144058-2140" host does not exist!

** /stderr **
multinode_test.go:245: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status" : exit status 7
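Exit status 7 here corresponds to a host reported as `Nonexistent`; the stdout block above can be classified mechanically from the `host:` line. A small sketch (the status text is pasted verbatim from the log; the grep-based classification is illustrative, not the test suite's actual check):

```shell
# Classify the `minikube status` stdout captured above: a host line of
# "Nonexistent" means the backing container is gone, consistent with
# the exit code 7 the test observed.
status_out='multinode-20211117144058-2140
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent'
if printf '%s\n' "$status_out" | grep -q '^host: Nonexistent$'; then
  verdict="host missing"
else
  verdict="host present"
fi
echo "$verdict"   # → host missing
```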
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "8a0cd05a565e9f892d41e270256c255970cf7bcdcdee2b7195fe98b43d53f414",
	        "Created": "2021-11-17T22:41:34.192651591Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (150.216559ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:41:48.643681    7568 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.61s)

TestMultiNode/serial/RestartKeepsNodes (84.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:265: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211117144058-2140
multinode_test.go:272: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20211117144058-2140
multinode_test.go:272: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-20211117144058-2140: exit status 82 (14.766081777s)

-- stdout --
	* Stopping node "multinode-20211117144058-2140"  ...
	* Stopping node "multinode-20211117144058-2140"  ...
	* Stopping node "multinode-20211117144058-2140"  ...
	* Stopping node "multinode-20211117144058-2140"  ...
	* Stopping node "multinode-20211117144058-2140"  ...
	* Stopping node "multinode-20211117144058-2140"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20211117144058-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:274: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-20211117144058-2140" : exit status 82
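The six repeated "Stopping node" lines before the GUEST_STOP_TIMEOUT error suggest a bounded retry loop around a stop/inspect step that kept failing because the container was already gone. A purely illustrative sketch of that pattern (`stop_probe`, the attempt cap, and the always-failing probe are hypothetical stand-ins, not minikube's implementation):

```shell
# Illustrative bounded-retry loop: stop_probe stands in for the per-attempt
# stop/inspect step and always fails here, like the missing container above,
# so the loop runs to its attempt cap and the caller reports a timeout.
stop_probe() { return 1; }
attempts=0
max_attempts=6
while [ "$attempts" -lt "$max_attempts" ]; do
  attempts=$((attempts + 1))
  if stop_probe; then
    break
  fi
done
echo "attempts=$attempts"   # → attempts=6
```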
multinode_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117144058-2140 --wait=true -v=8 --alsologtostderr
multinode_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211117144058-2140 --wait=true -v=8 --alsologtostderr: exit status 80 (1m9.663654078s)

-- stdout --
	* [multinode-20211117144058-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20211117144058-2140 in cluster multinode-20211117144058-2140
	* Pulling base image ...
	* docker "multinode-20211117144058-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20211117144058-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 14:42:03.490578    7599 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:42:03.490704    7599 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:42:03.490709    7599 out.go:310] Setting ErrFile to fd 2...
	I1117 14:42:03.490713    7599 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:42:03.490791    7599 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:42:03.491033    7599 out.go:304] Setting JSON to false
	I1117 14:42:03.515212    7599 start.go:112] hostinfo: {"hostname":"37310.local","uptime":2498,"bootTime":1637186425,"procs":345,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:42:03.515314    7599 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:42:03.542531    7599 out.go:176] * [multinode-20211117144058-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:42:03.542792    7599 notify.go:174] Checking for updates...
	I1117 14:42:03.590226    7599 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:42:03.616177    7599 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:42:03.641809    7599 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:42:03.668021    7599 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:42:03.668673    7599 config.go:176] Loaded profile config "multinode-20211117144058-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:42:03.668742    7599 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:42:03.767957    7599 docker.go:132] docker version: linux-20.10.6
	I1117 14:42:03.768104    7599 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:42:03.946929    7599 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:42:03.893150446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:42:03.995484    7599 out.go:176] * Using the docker driver based on existing profile
	I1117 14:42:03.995574    7599 start.go:280] selected driver: docker
	I1117 14:42:03.995597    7599 start.go:775] validating driver "docker" against &{Name:multinode-20211117144058-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117144058-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:42:03.995712    7599 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:42:03.996082    7599 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:42:04.175396    7599 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:42:04.12291432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:42:04.177415    7599 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 14:42:04.177442    7599 cni.go:93] Creating CNI manager for ""
	I1117 14:42:04.177448    7599 cni.go:154] 1 nodes found, recommending kindnet
	I1117 14:42:04.177465    7599 start_flags.go:282] config:
	{Name:multinode-20211117144058-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117144058-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:42:04.226204    7599 out.go:176] * Starting control plane node multinode-20211117144058-2140 in cluster multinode-20211117144058-2140
	I1117 14:42:04.226292    7599 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:42:04.251843    7599 out.go:176] * Pulling base image ...
	I1117 14:42:04.251891    7599 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:42:04.251940    7599 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:42:04.251941    7599 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 14:42:04.251957    7599 cache.go:57] Caching tarball of preloaded images
	I1117 14:42:04.252113    7599 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 14:42:04.252130    7599 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 14:42:04.252748    7599 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/multinode-20211117144058-2140/config.json ...
	I1117 14:42:04.376053    7599 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:42:04.376065    7599 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:42:04.376077    7599 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:42:04.376116    7599 start.go:313] acquiring machines lock for multinode-20211117144058-2140: {Name:mk8e725fd0df85c062d82279df9d95b56272d117 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:42:04.376194    7599 start.go:317] acquired machines lock for "multinode-20211117144058-2140" in 61.227µs
	I1117 14:42:04.376217    7599 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:42:04.376224    7599 fix.go:55] fixHost starting: 
	I1117 14:42:04.376463    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:04.486951    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:04.487034    7599 fix.go:108] recreateIfNeeded on multinode-20211117144058-2140: state= err=unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:04.487060    7599 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:42:04.514080    7599 out.go:176] * docker "multinode-20211117144058-2140" container is missing, will recreate.
	I1117 14:42:04.514181    7599 delete.go:124] DEMOLISHING multinode-20211117144058-2140 ...
	I1117 14:42:04.514411    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:04.625392    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:42:04.625429    7599 stop.go:75] unable to get state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:04.625444    7599 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:04.625834    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:04.736360    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:04.736412    7599 delete.go:82] Unable to get host status for multinode-20211117144058-2140, assuming it has already been deleted: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:04.736520    7599 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117144058-2140
	W1117 14:42:04.844887    7599 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117144058-2140 returned with exit code 1
	I1117 14:42:04.844918    7599 kic.go:360] could not find the container multinode-20211117144058-2140 to remove it. will try anyways
	I1117 14:42:04.844996    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:04.958488    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:42:04.958527    7599 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:04.958611    7599 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0"
	W1117 14:42:05.068045    7599 cli_runner.go:162] docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:42:05.068078    7599 oci.go:658] error shutdown multinode-20211117144058-2140: docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:06.078453    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:06.193339    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:06.193380    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:06.193395    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:06.193423    7599 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:06.751286    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:06.868479    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:06.868519    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:06.868527    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:06.868547    7599 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:07.955531    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:08.073431    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:08.073469    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:08.073476    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:08.073496    7599 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:09.387406    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:09.500789    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:09.500828    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:09.500835    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:09.500856    7599 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:11.090253    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:11.206642    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:11.206687    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:11.206703    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:11.206724    7599 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:13.554509    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:13.670563    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:13.670601    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:13.670607    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:13.670627    7599 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:18.182175    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:18.297151    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:18.297189    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:18.297196    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:18.297217    7599 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:21.529137    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:21.645374    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:21.645413    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:21.645429    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:21.645453    7599 oci.go:87] couldn't shut down multinode-20211117144058-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	 
	I1117 14:42:21.645527    7599 cli_runner.go:115] Run: docker rm -f -v multinode-20211117144058-2140
	I1117 14:42:21.759047    7599 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117144058-2140
	W1117 14:42:21.871992    7599 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117144058-2140 returned with exit code 1
	I1117 14:42:21.872133    7599 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:42:21.990579    7599 cli_runner.go:115] Run: docker network rm multinode-20211117144058-2140
	I1117 14:42:24.742571    7599 cli_runner.go:168] Completed: docker network rm multinode-20211117144058-2140: (2.751896211s)
	W1117 14:42:24.742840    7599 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:42:24.742846    7599 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:42:25.745275    7599 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:42:25.772549    7599 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 14:42:25.772719    7599 start.go:160] libmachine.API.Create for "multinode-20211117144058-2140" (driver="docker")
	I1117 14:42:25.772752    7599 client.go:168] LocalClient.Create starting
	I1117 14:42:25.772964    7599 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:42:25.773042    7599 main.go:130] libmachine: Decoding PEM data...
	I1117 14:42:25.773071    7599 main.go:130] libmachine: Parsing certificate...
	I1117 14:42:25.773185    7599 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:42:25.773241    7599 main.go:130] libmachine: Decoding PEM data...
	I1117 14:42:25.773268    7599 main.go:130] libmachine: Parsing certificate...
	I1117 14:42:25.774197    7599 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:42:25.888927    7599 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:42:25.889071    7599 network_create.go:254] running [docker network inspect multinode-20211117144058-2140] to gather additional debugging logs...
	I1117 14:42:25.889090    7599 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140
	W1117 14:42:25.999464    7599 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 returned with exit code 1
	I1117 14:42:25.999487    7599 network_create.go:257] error running [docker network inspect multinode-20211117144058-2140]: docker network inspect multinode-20211117144058-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117144058-2140
	I1117 14:42:25.999500    7599 network_create.go:259] output of [docker network inspect multinode-20211117144058-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117144058-2140
	
	** /stderr **
	I1117 14:42:25.999587    7599 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:42:26.110430    7599 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0004aa7d8] misses:0}
	I1117 14:42:26.110465    7599 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:42:26.110481    7599 network_create.go:106] attempt to create docker network multinode-20211117144058-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:42:26.110558    7599 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140
	I1117 14:42:30.051318    7599 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140: (3.940641419s)
	I1117 14:42:30.051343    7599 network_create.go:90] docker network multinode-20211117144058-2140 192.168.49.0/24 created
	I1117 14:42:30.051361    7599 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117144058-2140" container
	I1117 14:42:30.051463    7599 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:42:30.164170    7599 cli_runner.go:115] Run: docker volume create multinode-20211117144058-2140 --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:42:30.275728    7599 oci.go:102] Successfully created a docker volume multinode-20211117144058-2140
	I1117 14:42:30.275868    7599 cli_runner.go:115] Run: docker run --rm --name multinode-20211117144058-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --entrypoint /usr/bin/test -v multinode-20211117144058-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:42:30.692880    7599 oci.go:106] Successfully prepared a docker volume multinode-20211117144058-2140
	E1117 14:42:30.692926    7599 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:42:30.692943    7599 client.go:171] LocalClient.Create took 4.920090288s
	I1117 14:42:30.692949    7599 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:42:30.692968    7599 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:42:30.693079    7599 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117144058-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:42:32.693282    7599 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:42:32.693366    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:42:32.848082    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:42:32.848158    7599 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:33.000950    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:42:33.129796    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:42:33.129879    7599 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:33.430458    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:42:33.554263    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:42:33.554344    7599 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:34.129637    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:42:34.253294    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	W1117 14:42:34.253384    7599 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:42:34.253402    7599 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:34.253416    7599 start.go:129] duration metric: createHost completed in 8.507903938s
	I1117 14:42:34.253479    7599 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:42:34.253540    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:42:34.378242    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:42:34.378350    7599 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:34.557142    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:42:34.701412    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:42:34.701495    7599 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:35.040370    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:42:35.170609    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:42:35.170688    7599 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:35.640435    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:42:35.767290    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	W1117 14:42:35.767387    7599 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:42:35.767408    7599 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:35.767417    7599 fix.go:57] fixHost completed within 31.390598562s
	I1117 14:42:35.767425    7599 start.go:80] releasing machines lock for "multinode-20211117144058-2140", held for 31.390627491s
	W1117 14:42:35.767442    7599 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:42:35.767567    7599 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:42:35.767573    7599 start.go:547] Will try again in 5 seconds ...
	I1117 14:42:36.807334    7599 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117144058-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.114098957s)
	I1117 14:42:36.807358    7599 kic.go:188] duration metric: took 6.114274 seconds to extract preloaded images to volume
	I1117 14:42:40.777594    7599 start.go:313] acquiring machines lock for multinode-20211117144058-2140: {Name:mk8e725fd0df85c062d82279df9d95b56272d117 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:42:40.777763    7599 start.go:317] acquired machines lock for "multinode-20211117144058-2140" in 136.103µs
	I1117 14:42:40.777803    7599 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:42:40.777810    7599 fix.go:55] fixHost starting: 
	I1117 14:42:40.778273    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:40.895632    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:40.895676    7599 fix.go:108] recreateIfNeeded on multinode-20211117144058-2140: state= err=unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:40.895691    7599 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:42:40.923266    7599 out.go:176] * docker "multinode-20211117144058-2140" container is missing, will recreate.
	I1117 14:42:40.923367    7599 delete.go:124] DEMOLISHING multinode-20211117144058-2140 ...
	I1117 14:42:40.923554    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:41.036665    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:42:41.036702    7599 stop.go:75] unable to get state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:41.036719    7599 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:41.037106    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:41.148719    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:41.148760    7599 delete.go:82] Unable to get host status for multinode-20211117144058-2140, assuming it has already been deleted: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:41.148870    7599 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117144058-2140
	W1117 14:42:41.256839    7599 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117144058-2140 returned with exit code 1
	I1117 14:42:41.256867    7599 kic.go:360] could not find the container multinode-20211117144058-2140 to remove it. will try anyways
	I1117 14:42:41.256967    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:41.365510    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:42:41.365554    7599 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:41.365661    7599 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0"
	W1117 14:42:41.476446    7599 cli_runner.go:162] docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:42:41.476473    7599 oci.go:658] error shutdown multinode-20211117144058-2140: docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:42.484054    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:42.595722    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:42.595764    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:42.595775    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:42.595793    7599 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:42.995380    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:43.107869    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:43.107911    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:43.107926    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:43.107945    7599 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:43.703128    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:43.816417    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:43.816469    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:43.816481    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:43.816503    7599 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:45.146521    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:45.264043    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:45.264083    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:45.264091    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:45.264111    7599 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:46.477430    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:46.589684    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:46.589725    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:46.589735    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:46.589755    7599 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:48.370832    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:48.500146    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:48.500186    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:48.500195    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:48.500215    7599 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:51.774681    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:51.889346    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:51.889385    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:51.889394    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:51.889415    7599 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:57.997689    7599 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:42:58.108036    7599 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:42:58.108077    7599 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:42:58.108086    7599 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:42:58.108110    7599 oci.go:87] couldn't shut down multinode-20211117144058-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	 
	I1117 14:42:58.108198    7599 cli_runner.go:115] Run: docker rm -f -v multinode-20211117144058-2140
	I1117 14:42:58.222404    7599 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117144058-2140
	W1117 14:42:58.332093    7599 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117144058-2140 returned with exit code 1
	I1117 14:42:58.332207    7599 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:42:58.443035    7599 cli_runner.go:115] Run: docker network rm multinode-20211117144058-2140
	I1117 14:43:01.137316    7599 cli_runner.go:168] Completed: docker network rm multinode-20211117144058-2140: (2.694163032s)
	W1117 14:43:01.137566    7599 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:43:01.137574    7599 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:43:02.145706    7599 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:43:02.172766    7599 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 14:43:02.172918    7599 start.go:160] libmachine.API.Create for "multinode-20211117144058-2140" (driver="docker")
	I1117 14:43:02.172951    7599 client.go:168] LocalClient.Create starting
	I1117 14:43:02.173159    7599 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:43:02.173246    7599 main.go:130] libmachine: Decoding PEM data...
	I1117 14:43:02.173275    7599 main.go:130] libmachine: Parsing certificate...
	I1117 14:43:02.173369    7599 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:43:02.173426    7599 main.go:130] libmachine: Decoding PEM data...
	I1117 14:43:02.173441    7599 main.go:130] libmachine: Parsing certificate...
	I1117 14:43:02.195638    7599 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:43:02.311238    7599 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:43:02.311339    7599 network_create.go:254] running [docker network inspect multinode-20211117144058-2140] to gather additional debugging logs...
	I1117 14:43:02.311358    7599 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140
	W1117 14:43:02.422437    7599 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:02.422460    7599 network_create.go:257] error running [docker network inspect multinode-20211117144058-2140]: docker network inspect multinode-20211117144058-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117144058-2140
	I1117 14:43:02.422480    7599 network_create.go:259] output of [docker network inspect multinode-20211117144058-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117144058-2140
	
	** /stderr **
	I1117 14:43:02.422566    7599 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:43:02.534221    7599 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004aa7d8] amended:false}} dirty:map[] misses:0}
	I1117 14:43:02.534255    7599 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:43:02.534444    7599 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0004aa7d8] amended:true}} dirty:map[192.168.49.0:0xc0004aa7d8 192.168.58.0:0xc00000e350] misses:0}
	I1117 14:43:02.534456    7599 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:43:02.534463    7599 network_create.go:106] attempt to create docker network multinode-20211117144058-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:43:02.534539    7599 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140
	I1117 14:43:06.449552    7599 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140: (3.914871256s)
	I1117 14:43:06.449579    7599 network_create.go:90] docker network multinode-20211117144058-2140 192.168.58.0/24 created
	I1117 14:43:06.449592    7599 kic.go:106] calculated static IP "192.168.58.2" for the "multinode-20211117144058-2140" container
	I1117 14:43:06.449709    7599 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:43:06.558639    7599 cli_runner.go:115] Run: docker volume create multinode-20211117144058-2140 --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:43:06.671377    7599 oci.go:102] Successfully created a docker volume multinode-20211117144058-2140
	I1117 14:43:06.671497    7599 cli_runner.go:115] Run: docker run --rm --name multinode-20211117144058-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --entrypoint /usr/bin/test -v multinode-20211117144058-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:43:07.151675    7599 oci.go:106] Successfully prepared a docker volume multinode-20211117144058-2140
	E1117 14:43:07.151737    7599 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:43:07.151750    7599 client.go:171] LocalClient.Create took 4.97869454s
	I1117 14:43:07.151757    7599 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:43:07.151775    7599 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:43:07.151946    7599 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117144058-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:43:09.152496    7599 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:43:09.152603    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:43:09.275733    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:09.275812    7599 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:09.474296    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:43:09.617519    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:09.617625    7599 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:09.918685    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:43:10.039807    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:10.039911    7599 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:10.750352    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:43:10.873966    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	W1117 14:43:10.874060    7599 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:43:10.874089    7599 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:10.874103    7599 start.go:129] duration metric: createHost completed in 8.728184101s
	I1117 14:43:10.874171    7599 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:43:10.874240    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:43:10.996844    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:10.996969    7599 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:11.342910    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:43:11.455719    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:11.455813    7599 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:11.904793    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:43:12.043709    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:12.043791    7599 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:12.628805    7599 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:43:12.762893    7599 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	W1117 14:43:12.762986    7599 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:43:12.763008    7599 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:12.763015    7599 fix.go:57] fixHost completed within 31.984599594s
	I1117 14:43:12.763023    7599 start.go:80] releasing machines lock for "multinode-20211117144058-2140", held for 31.984642698s
	W1117 14:43:12.763178    7599 out.go:241] * Failed to start docker container. Running "minikube delete -p multinode-20211117144058-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117144058-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:43:12.904474    7599 out.go:176] 
	W1117 14:43:12.904602    7599 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:43:12.904613    7599 out.go:241] * 
	* 
	W1117 14:43:12.905166    7599 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:43:13.035890    7599 out.go:176] 

** /stderr **
multinode_test.go:279: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-20211117144058-2140" : exit status 80
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211117144058-2140
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "8f4b9cd9f88aa4112e7a7ac0655d4e7d800d39daf1ff804291325a59455aab64",
	        "Created": "2021-11-17T22:43:02.66107297Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (161.803173ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:43:13.544412    7923 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (84.90s)
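Every failure in this run reduces to the same probe: the status helper shells out to `docker container inspect <name> --format={{.State.Status}}` and, when the container is gone (exit status 1, `Error: No such container: ...` on stderr), reports the host as `Nonexistent` (see the `status.go:247`/`status.go:258` lines above). A minimal illustrative sketch of that mapping (the `classify`/`probe_host_state` helpers are hypothetical; only the command line and the `Nonexistent` / `No such container` strings come from the log):

```python
import subprocess

def probe_host_state(container: str) -> str:
    """Run the same probe the log shows minikube's status command running."""
    proc = subprocess.run(
        ["docker", "container", "inspect", container,
         "--format", "{{.State.Status}}"],
        capture_output=True, text=True,
    )
    return classify(proc.returncode, proc.stdout, proc.stderr)

def classify(exit_code: int, stdout: str, stderr: str) -> str:
    """Map docker inspect results to a host state string.

    Hypothetical helper: exit status 1 plus "No such container" on stderr
    is exactly the failure repeated throughout this report, and the log
    shows it surfacing as host "Nonexistent".
    """
    if exit_code != 0:
        if "No such container" in stderr:
            return "Nonexistent"
        return "Unknown"
    return stdout.strip()  # e.g. "running" or "exited"
```

With the container deleted out from under the profile, `classify(1, "", "Error: No such container: multinode-20211117144058-2140")` yields `"Nonexistent"`, matching the `host: Nonexistent` rows in the status output above.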

TestMultiNode/serial/DeleteNode (0.71s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117144058-2140 node delete m03
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117144058-2140 node delete m03: exit status 80 (289.204724ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:378: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 node delete m03": exit status 80
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr: exit status 7 (150.771361ms)

-- stdout --
	multinode-20211117144058-2140
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I1117 14:43:13.874167    7933 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:43:13.874285    7933 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:43:13.874290    7933 out.go:310] Setting ErrFile to fd 2...
	I1117 14:43:13.874293    7933 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:43:13.874365    7933 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:43:13.874535    7933 out.go:304] Setting JSON to false
	I1117 14:43:13.874549    7933 mustload.go:65] Loading cluster: multinode-20211117144058-2140
	I1117 14:43:13.874777    7933 config.go:176] Loaded profile config "multinode-20211117144058-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:43:13.874789    7933 status.go:253] checking status of multinode-20211117144058-2140 ...
	I1117 14:43:13.875129    7933 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:13.984587    7933 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:13.984645    7933 status.go:328] multinode-20211117144058-2140 host status = "" (err=state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	)
	I1117 14:43:13.984667    7933 status.go:255] multinode-20211117144058-2140 status: &{Name:multinode-20211117144058-2140 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 14:43:13.984685    7933 status.go:258] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	E1117 14:43:13.984688    7933 status.go:261] The "multinode-20211117144058-2140" host does not exist!

** /stderr **
multinode_test.go:384: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "8f4b9cd9f88aa4112e7a7ac0655d4e7d800d39daf1ff804291325a59455aab64",
	        "Created": "2021-11-17T22:43:02.66107297Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (154.066395ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:43:14.252784    7942 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.71s)

TestMultiNode/serial/StopMultiNode (15.37s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117144058-2140 stop
multinode_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117144058-2140 stop: exit status 82 (14.801453823s)

-- stdout --
	* Stopping node "multinode-20211117144058-2140"  ...
	* Stopping node "multinode-20211117144058-2140"  ...
	* Stopping node "multinode-20211117144058-2140"  ...
	* Stopping node "multinode-20211117144058-2140"  ...
	* Stopping node "multinode-20211117144058-2140"  ...
	* Stopping node "multinode-20211117144058-2140"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect multinode-20211117144058-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:298: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 stop": exit status 82
multinode_test.go:302: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status
multinode_test.go:302: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status: exit status 7 (149.798036ms)

-- stdout --
	multinode-20211117144058-2140
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 14:43:29.204655    7974 status.go:258] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	E1117 14:43:29.204663    7974 status.go:261] The "multinode-20211117144058-2140" host does not exist!

** /stderr **
multinode_test.go:309: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr
multinode_test.go:309: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr: exit status 7 (152.764384ms)

-- stdout --
	multinode-20211117144058-2140
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I1117 14:43:29.245144    7979 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:43:29.245269    7979 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:43:29.245275    7979 out.go:310] Setting ErrFile to fd 2...
	I1117 14:43:29.245278    7979 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:43:29.245352    7979 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:43:29.245525    7979 out.go:304] Setting JSON to false
	I1117 14:43:29.245539    7979 mustload.go:65] Loading cluster: multinode-20211117144058-2140
	I1117 14:43:29.245770    7979 config.go:176] Loaded profile config "multinode-20211117144058-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:43:29.245782    7979 status.go:253] checking status of multinode-20211117144058-2140 ...
	I1117 14:43:29.246121    7979 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:29.357549    7979 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:29.357610    7979 status.go:328] multinode-20211117144058-2140 host status = "" (err=state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	)
	I1117 14:43:29.357626    7979 status.go:255] multinode-20211117144058-2140 status: &{Name:multinode-20211117144058-2140 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1117 14:43:29.357645    7979 status.go:258] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	E1117 14:43:29.357649    7979 status.go:261] The "multinode-20211117144058-2140" host does not exist!

** /stderr **
multinode_test.go:315: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr": multinode-20211117144058-2140
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:319: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-20211117144058-2140 status --alsologtostderr": multinode-20211117144058-2140
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "8f4b9cd9f88aa4112e7a7ac0655d4e7d800d39daf1ff804291325a59455aab64",
	        "Created": "2021-11-17T22:43:02.66107297Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (151.081773ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:43:29.621101    7988 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (15.37s)

TestMultiNode/serial/RestartMultiNode (69.67s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:326: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:336: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117144058-2140 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:336: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211117144058-2140 --wait=true -v=8 --alsologtostderr --driver=docker : exit status 80 (1m9.277359909s)

-- stdout --
	* [multinode-20211117144058-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20211117144058-2140 in cluster multinode-20211117144058-2140
	* Pulling base image ...
	* docker "multinode-20211117144058-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-20211117144058-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 14:43:29.755938    7996 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:43:29.756065    7996 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:43:29.756071    7996 out.go:310] Setting ErrFile to fd 2...
	I1117 14:43:29.756074    7996 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:43:29.756155    7996 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:43:29.756403    7996 out.go:304] Setting JSON to false
	I1117 14:43:29.780828    7996 start.go:112] hostinfo: {"hostname":"37310.local","uptime":2584,"bootTime":1637186425,"procs":345,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:43:29.780931    7996 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:43:29.808293    7996 out.go:176] * [multinode-20211117144058-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:43:29.808503    7996 notify.go:174] Checking for updates...
	I1117 14:43:29.855732    7996 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:43:29.881728    7996 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:43:29.907627    7996 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:43:29.933575    7996 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:43:29.934892    7996 config.go:176] Loaded profile config "multinode-20211117144058-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:43:29.935225    7996 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:43:30.027873    7996 docker.go:132] docker version: linux-20.10.6
	I1117 14:43:30.028026    7996 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:43:30.204061    7996 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:43:30.150848178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:43:30.231160    7996 out.go:176] * Using the docker driver based on existing profile
	I1117 14:43:30.231196    7996 start.go:280] selected driver: docker
	I1117 14:43:30.231212    7996 start.go:775] validating driver "docker" against &{Name:multinode-20211117144058-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117144058-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:43:30.231351    7996 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:43:30.231718    7996 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:43:30.412546    7996 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:43:30.358700246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:43:30.414538    7996 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 14:43:30.414566    7996 cni.go:93] Creating CNI manager for ""
	I1117 14:43:30.414572    7996 cni.go:154] 1 nodes found, recommending kindnet
	I1117 14:43:30.414581    7996 start_flags.go:282] config:
	{Name:multinode-20211117144058-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:multinode-20211117144058-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:43:30.463110    7996 out.go:176] * Starting control plane node multinode-20211117144058-2140 in cluster multinode-20211117144058-2140
	I1117 14:43:30.463200    7996 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:43:30.489213    7996 out.go:176] * Pulling base image ...
	I1117 14:43:30.489276    7996 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:43:30.489362    7996 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:43:30.489368    7996 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 14:43:30.489389    7996 cache.go:57] Caching tarball of preloaded images
	I1117 14:43:30.489606    7996 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 14:43:30.489635    7996 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.3 on docker
	I1117 14:43:30.490387    7996 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/multinode-20211117144058-2140/config.json ...
	I1117 14:43:30.606444    7996 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:43:30.606456    7996 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:43:30.606467    7996 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:43:30.606502    7996 start.go:313] acquiring machines lock for multinode-20211117144058-2140: {Name:mk8e725fd0df85c062d82279df9d95b56272d117 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:43:30.606575    7996 start.go:317] acquired machines lock for "multinode-20211117144058-2140" in 55.61µs
	I1117 14:43:30.606598    7996 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:43:30.606604    7996 fix.go:55] fixHost starting: 
	I1117 14:43:30.606845    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:30.718108    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:30.718176    7996 fix.go:108] recreateIfNeeded on multinode-20211117144058-2140: state= err=unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:30.718204    7996 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:43:30.745068    7996 out.go:176] * docker "multinode-20211117144058-2140" container is missing, will recreate.
	I1117 14:43:30.745128    7996 delete.go:124] DEMOLISHING multinode-20211117144058-2140 ...
	I1117 14:43:30.745385    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:30.859146    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:43:30.859187    7996 stop.go:75] unable to get state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:30.859200    7996 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:30.859600    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:30.969494    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:30.969537    7996 delete.go:82] Unable to get host status for multinode-20211117144058-2140, assuming it has already been deleted: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:30.969629    7996 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117144058-2140
	W1117 14:43:31.081301    7996 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:31.081328    7996 kic.go:360] could not find the container multinode-20211117144058-2140 to remove it. will try anyways
	I1117 14:43:31.081414    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:31.194122    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:43:31.194163    7996 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:31.194258    7996 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0"
	W1117 14:43:31.304138    7996 cli_runner.go:162] docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:43:31.304162    7996 oci.go:658] error shutdown multinode-20211117144058-2140: docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:32.304789    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:32.417259    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:32.417311    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:32.417324    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:43:32.417355    7996 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:32.970950    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:33.090454    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:33.090494    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:33.090504    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:43:33.090527    7996 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:34.171692    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:34.287806    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:34.287844    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:34.287851    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:43:34.287879    7996 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:35.607424    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:35.718049    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:35.718106    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:35.718117    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:43:35.718143    7996 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:37.310574    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:37.421298    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:37.421338    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:37.421346    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:43:37.421368    7996 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:39.772358    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:39.888045    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:39.888094    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:39.888119    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:43:39.888144    7996 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:44.404806    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:44.518149    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:44.518194    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:44.518203    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:43:44.518233    7996 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:47.741196    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:43:47.851724    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:43:47.851762    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:47.851768    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:43:47.851802    7996 oci.go:87] couldn't shut down multinode-20211117144058-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	 
	I1117 14:43:47.851880    7996 cli_runner.go:115] Run: docker rm -f -v multinode-20211117144058-2140
	I1117 14:43:47.969195    7996 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117144058-2140
	W1117 14:43:48.080646    7996 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:48.080756    7996 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:43:48.194094    7996 cli_runner.go:115] Run: docker network rm multinode-20211117144058-2140
	I1117 14:43:50.927549    7996 cli_runner.go:168] Completed: docker network rm multinode-20211117144058-2140: (2.733366234s)
	W1117 14:43:50.927818    7996 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:43:50.927825    7996 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:43:51.938010    7996 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:43:51.985911    7996 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 14:43:51.986095    7996 start.go:160] libmachine.API.Create for "multinode-20211117144058-2140" (driver="docker")
	I1117 14:43:51.986132    7996 client.go:168] LocalClient.Create starting
	I1117 14:43:51.986309    7996 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:43:51.986449    7996 main.go:130] libmachine: Decoding PEM data...
	I1117 14:43:51.986483    7996 main.go:130] libmachine: Parsing certificate...
	I1117 14:43:51.986557    7996 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:43:51.986609    7996 main.go:130] libmachine: Decoding PEM data...
	I1117 14:43:51.986625    7996 main.go:130] libmachine: Parsing certificate...
	I1117 14:43:51.987707    7996 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:43:52.101336    7996 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:43:52.101442    7996 network_create.go:254] running [docker network inspect multinode-20211117144058-2140] to gather additional debugging logs...
	I1117 14:43:52.101459    7996 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140
	W1117 14:43:52.211497    7996 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:52.211522    7996 network_create.go:257] error running [docker network inspect multinode-20211117144058-2140]: docker network inspect multinode-20211117144058-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117144058-2140
	I1117 14:43:52.211533    7996 network_create.go:259] output of [docker network inspect multinode-20211117144058-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117144058-2140
	
	** /stderr **
	I1117 14:43:52.211620    7996 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:43:52.323626    7996 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000780240] misses:0}
	I1117 14:43:52.323666    7996 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:43:52.323687    7996 network_create.go:106] attempt to create docker network multinode-20211117144058-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:43:52.323757    7996 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140
	I1117 14:43:56.138181    7996 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140: (3.814293262s)
	I1117 14:43:56.138214    7996 network_create.go:90] docker network multinode-20211117144058-2140 192.168.49.0/24 created
	I1117 14:43:56.138233    7996 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20211117144058-2140" container
	I1117 14:43:56.138367    7996 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:43:56.246843    7996 cli_runner.go:115] Run: docker volume create multinode-20211117144058-2140 --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:43:56.355055    7996 oci.go:102] Successfully created a docker volume multinode-20211117144058-2140
	I1117 14:43:56.355178    7996 cli_runner.go:115] Run: docker run --rm --name multinode-20211117144058-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --entrypoint /usr/bin/test -v multinode-20211117144058-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:43:56.806081    7996 oci.go:106] Successfully prepared a docker volume multinode-20211117144058-2140
	E1117 14:43:56.806131    7996 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:43:56.806150    7996 client.go:171] LocalClient.Create took 4.819917081s
	I1117 14:43:56.806152    7996 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:43:56.806191    7996 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:43:56.806302    7996 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117144058-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:43:58.807371    7996 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:43:58.807462    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:43:58.950390    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:58.950465    7996 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:59.100070    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:43:59.226143    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:59.226245    7996 retry.go:31] will retry after 300.341948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:43:59.526868    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:43:59.646457    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:43:59.646541    7996 retry.go:31] will retry after 571.057104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:00.217932    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:00.338740    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	W1117 14:44:00.338822    7996 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:44:00.338838    7996 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:00.338851    7996 start.go:129] duration metric: createHost completed in 8.400642131s
	I1117 14:44:00.338918    7996 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:44:00.338989    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:00.463863    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:00.463967    7996 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:00.650683    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:00.777942    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:00.778021    7996 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:01.109565    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:01.238022    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:01.238105    7996 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:01.698493    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:01.819543    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	W1117 14:44:01.819636    7996 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:44:01.819649    7996 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:01.819658    7996 fix.go:57] fixHost completed within 31.212462012s
	I1117 14:44:01.819665    7996 start.go:80] releasing machines lock for "multinode-20211117144058-2140", held for 31.212490054s
	W1117 14:44:01.819683    7996 start.go:532] error starting host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:44:01.819797    7996 out.go:241] ! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:44:01.819803    7996 start.go:547] Will try again in 5 seconds ...
	I1117 14:44:02.976680    7996 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117144058-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (6.170225293s)
	I1117 14:44:02.976705    7996 kic.go:188] duration metric: took 6.170397 seconds to extract preloaded images to volume
	I1117 14:44:06.829624    7996 start.go:313] acquiring machines lock for multinode-20211117144058-2140: {Name:mk8e725fd0df85c062d82279df9d95b56272d117 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:44:06.829793    7996 start.go:317] acquired machines lock for "multinode-20211117144058-2140" in 138.335µs
	I1117 14:44:06.829837    7996 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:44:06.829845    7996 fix.go:55] fixHost starting: 
	I1117 14:44:06.830289    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:06.945835    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:44:06.945875    7996 fix.go:108] recreateIfNeeded on multinode-20211117144058-2140: state= err=unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:06.945885    7996 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:44:06.972871    7996 out.go:176] * docker "multinode-20211117144058-2140" container is missing, will recreate.
	I1117 14:44:06.972986    7996 delete.go:124] DEMOLISHING multinode-20211117144058-2140 ...
	I1117 14:44:06.973171    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:07.085123    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:44:07.085163    7996 stop.go:75] unable to get state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:07.085177    7996 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:07.085580    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:07.194074    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:44:07.194134    7996 delete.go:82] Unable to get host status for multinode-20211117144058-2140, assuming it has already been deleted: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:07.194754    7996 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117144058-2140
	W1117 14:44:07.306148    7996 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:07.306176    7996 kic.go:360] could not find the container multinode-20211117144058-2140 to remove it. will try anyways
	I1117 14:44:07.306259    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:07.417312    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:44:07.417353    7996 oci.go:83] error getting container status, will try to delete anyways: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:07.417455    7996 cli_runner.go:115] Run: docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0"
	W1117 14:44:07.526748    7996 cli_runner.go:162] docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:44:07.526775    7996 oci.go:658] error shutdown multinode-20211117144058-2140: docker exec --privileged -t multinode-20211117144058-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:08.529746    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:08.646048    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:44:08.646091    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:08.646109    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:44:08.646129    7996 retry.go:31] will retry after 391.517075ms: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:09.045211    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:09.158875    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:44:09.158914    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:09.158935    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:44:09.158954    7996 retry.go:31] will retry after 594.826393ms: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:09.758947    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:09.872945    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:44:09.872985    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:09.872995    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:44:09.873013    7996 retry.go:31] will retry after 1.326470261s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:11.201577    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:11.313430    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:44:11.313483    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:11.313492    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:44:11.313517    7996 retry.go:31] will retry after 1.212558276s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:12.532350    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:12.648658    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:44:12.648704    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:12.648716    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:44:12.648737    7996 retry.go:31] will retry after 1.779941781s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:14.431449    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:14.544543    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:44:14.544594    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:14.544605    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:44:14.544627    7996 retry.go:31] will retry after 3.268621161s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:17.814499    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:17.924977    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:44:17.925017    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:17.925027    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:44:17.925046    7996 retry.go:31] will retry after 6.097818893s: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:24.030412    7996 cli_runner.go:115] Run: docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}
	W1117 14:44:24.140475    7996 cli_runner.go:162] docker container inspect multinode-20211117144058-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:44:24.140520    7996 oci.go:670] temporary error verifying shutdown: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:24.140530    7996 oci.go:672] temporary error: container multinode-20211117144058-2140 status is  but expect it to be exited
	I1117 14:44:24.140554    7996 oci.go:87] couldn't shut down multinode-20211117144058-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	 
	I1117 14:44:24.140634    7996 cli_runner.go:115] Run: docker rm -f -v multinode-20211117144058-2140
	I1117 14:44:24.252185    7996 cli_runner.go:115] Run: docker container inspect -f {{.Id}} multinode-20211117144058-2140
	W1117 14:44:24.360611    7996 cli_runner.go:162] docker container inspect -f {{.Id}} multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:24.360754    7996 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:44:24.471534    7996 cli_runner.go:115] Run: docker network rm multinode-20211117144058-2140
	I1117 14:44:27.244395    7996 cli_runner.go:168] Completed: docker network rm multinode-20211117144058-2140: (2.772754072s)
	W1117 14:44:27.244669    7996 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:44:27.244676    7996 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:44:28.249947    7996 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:44:28.277329    7996 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 14:44:28.277531    7996 start.go:160] libmachine.API.Create for "multinode-20211117144058-2140" (driver="docker")
	I1117 14:44:28.277570    7996 client.go:168] LocalClient.Create starting
	I1117 14:44:28.277759    7996 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:44:28.277847    7996 main.go:130] libmachine: Decoding PEM data...
	I1117 14:44:28.277870    7996 main.go:130] libmachine: Parsing certificate...
	I1117 14:44:28.277961    7996 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:44:28.278017    7996 main.go:130] libmachine: Decoding PEM data...
	I1117 14:44:28.278032    7996 main.go:130] libmachine: Parsing certificate...
	I1117 14:44:28.278959    7996 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:44:28.395866    7996 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:44:28.395966    7996 network_create.go:254] running [docker network inspect multinode-20211117144058-2140] to gather additional debugging logs...
	I1117 14:44:28.395993    7996 cli_runner.go:115] Run: docker network inspect multinode-20211117144058-2140
	W1117 14:44:28.503845    7996 cli_runner.go:162] docker network inspect multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:28.503874    7996 network_create.go:257] error running [docker network inspect multinode-20211117144058-2140]: docker network inspect multinode-20211117144058-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20211117144058-2140
	I1117 14:44:28.503899    7996 network_create.go:259] output of [docker network inspect multinode-20211117144058-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20211117144058-2140
	
	** /stderr **
	I1117 14:44:28.504020    7996 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:44:28.616499    7996 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000780240] amended:false}} dirty:map[] misses:0}
	I1117 14:44:28.616532    7996 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:44:28.616700    7996 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000780240] amended:true}} dirty:map[192.168.49.0:0xc000780240 192.168.58.0:0xc000780198] misses:0}
	I1117 14:44:28.616712    7996 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:44:28.616719    7996 network_create.go:106] attempt to create docker network multinode-20211117144058-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:44:28.616801    7996 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140
	I1117 14:44:32.473639    7996 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20211117144058-2140: (3.85671766s)
	I1117 14:44:32.473661    7996 network_create.go:90] docker network multinode-20211117144058-2140 192.168.58.0/24 created
	I1117 14:44:32.473669    7996 kic.go:106] calculated static IP "192.168.58.2" for the "multinode-20211117144058-2140" container
	I1117 14:44:32.473769    7996 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:44:32.583927    7996 cli_runner.go:115] Run: docker volume create multinode-20211117144058-2140 --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:44:32.695307    7996 oci.go:102] Successfully created a docker volume multinode-20211117144058-2140
	I1117 14:44:32.695436    7996 cli_runner.go:115] Run: docker run --rm --name multinode-20211117144058-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20211117144058-2140 --entrypoint /usr/bin/test -v multinode-20211117144058-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:44:33.129101    7996 oci.go:106] Successfully prepared a docker volume multinode-20211117144058-2140
	E1117 14:44:33.129154    7996 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:44:33.129168    7996 client.go:171] LocalClient.Create took 4.851499596s
	I1117 14:44:33.129183    7996 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:44:33.129204    7996 kic.go:179] Starting extracting preloaded images to volume ...
	I1117 14:44:33.129364    7996 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20211117144058-2140:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
	I1117 14:44:35.129517    7996 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:44:35.129626    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:35.268077    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:35.268172    7996 retry.go:31] will retry after 198.278561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:35.466819    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:35.585374    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:35.585464    7996 retry.go:31] will retry after 298.905961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:35.884586    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:36.004970    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:36.005049    7996 retry.go:31] will retry after 704.572333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:36.709821    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:36.855100    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	W1117 14:44:36.855225    7996 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:44:36.855257    7996 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:36.855277    7996 start.go:129] duration metric: createHost completed in 8.60513847s
	I1117 14:44:36.855362    7996 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:44:36.855437    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:36.990665    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:36.990767    7996 retry.go:31] will retry after 341.333754ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:37.341383    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:37.466717    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:37.466805    7996 retry.go:31] will retry after 448.769687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:37.922623    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:38.054430    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	I1117 14:44:38.054509    7996 retry.go:31] will retry after 575.898922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:38.631542    7996 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140
	W1117 14:44:38.743431    7996 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140 returned with exit code 1
	W1117 14:44:38.743512    7996 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	W1117 14:44:38.743531    7996 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-20211117144058-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211117144058-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	I1117 14:44:38.743540    7996 fix.go:57] fixHost completed within 31.913090472s
	I1117 14:44:38.743549    7996 start.go:80] releasing machines lock for "multinode-20211117144058-2140", held for 31.913139966s
	W1117 14:44:38.743716    7996 out.go:241] * Failed to start docker container. Running "minikube delete -p multinode-20211117144058-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117144058-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:44:38.853999    7996 out.go:176] 
	W1117 14:44:38.854228    7996 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:44:38.854252    7996 out.go:241] * 
	* 
	W1117 14:44:38.855356    7996 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:44:38.968760    7996 out.go:176] 

** /stderr **
multinode_test.go:338: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-20211117144058-2140 --wait=true -v=8 --alsologtostderr --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "Name": "multinode-20211117144058-2140",
	        "Id": "6799f5e45f7d55ef83db1806a2ba2cfb5e00392dfc29e1538b949d0535b06a98",
	        "Created": "2021-11-17T22:44:28.742108081Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (157.378549ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:44:39.291091    8311 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (69.67s)

TestMultiNode/serial/ValidateNameConflict (102.97s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:425: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211117144058-2140
multinode_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117144058-2140-m01 --driver=docker 
multinode_test.go:434: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211117144058-2140-m01 --driver=docker : exit status 80 (45.729282411s)

-- stdout --
	* [multinode-20211117144058-2140-m01] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node multinode-20211117144058-2140-m01 in cluster multinode-20211117144058-2140-m01
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	* docker "multinode-20211117144058-2140-m01" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 14:44:45.300450    8317 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:45:19.770294    8317 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117144058-2140-m01" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:442: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211117144058-2140-m02 --driver=docker 
multinode_test.go:442: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211117144058-2140-m02 --driver=docker : exit status 80 (46.469566081s)

-- stdout --
	* [multinode-20211117144058-2140-m02] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node multinode-20211117144058-2140-m02 in cluster multinode-20211117144058-2140-m02
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	* docker "multinode-20211117144058-2140-m02" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5897MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 14:45:31.499735    8550 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:46:06.123339    8550 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p multinode-20211117144058-2140-m02" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:444: failed to start profile. args "out/minikube-darwin-amd64 start -p multinode-20211117144058-2140-m02 --driver=docker " : exit status 80
multinode_test.go:449: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20211117144058-2140
multinode_test.go:449: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20211117144058-2140: exit status 80 (330.057801ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:454: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20211117144058-2140-m02
multinode_test.go:454: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20211117144058-2140-m02: (10.131001133s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20211117144058-2140
helpers_test.go:235: (dbg) docker inspect multinode-20211117144058-2140:

-- stdout --
	[
	    {
	        "CreatedAt": "2021-11-17T22:41:03Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-20211117144058-2140"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/multinode-20211117144058-2140/_data",
	        "Name": "multinode-20211117144058-2140",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20211117144058-2140 -n multinode-20211117144058-2140: exit status 7 (153.266477ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:46:22.237890    8852 status.go:247] status error: host: state: unknown state "multinode-20211117144058-2140": docker container inspect multinode-20211117144058-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: multinode-20211117144058-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-20211117144058-2140" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (102.97s)

TestPreload (49.52s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20211117144623-2140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
preload_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20211117144623-2140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 80 (45.380713766s)

-- stdout --
	* [test-preload-20211117144623-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node test-preload-20211117144623-2140 in cluster test-preload-20211117144623-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "test-preload-20211117144623-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 14:46:23.754093    8895 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:46:23.754225    8895 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:46:23.754231    8895 out.go:310] Setting ErrFile to fd 2...
	I1117 14:46:23.754234    8895 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:46:23.754308    8895 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:46:23.754611    8895 out.go:304] Setting JSON to false
	I1117 14:46:23.779267    8895 start.go:112] hostinfo: {"hostname":"37310.local","uptime":2758,"bootTime":1637186425,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:46:23.779370    8895 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:46:23.806749    8895 out.go:176] * [test-preload-20211117144623-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:46:23.806946    8895 notify.go:174] Checking for updates...
	I1117 14:46:23.854021    8895 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:46:23.880466    8895 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:46:23.906574    8895 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:46:23.933047    8895 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:46:23.933466    8895 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:46:23.933509    8895 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:46:24.028730    8895 docker.go:132] docker version: linux-20.10.6
	I1117 14:46:24.028852    8895 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:46:24.207054    8895 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:46:24.154919695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:46:24.240695    8895 out.go:176] * Using the docker driver based on user configuration
	I1117 14:46:24.240745    8895 start.go:280] selected driver: docker
	I1117 14:46:24.240759    8895 start.go:775] validating driver "docker" against <nil>
	I1117 14:46:24.240791    8895 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:46:24.244223    8895 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:46:24.422842    8895 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:46:24.370040149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:46:24.422953    8895 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 14:46:24.423070    8895 start_flags.go:758] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 14:46:24.423087    8895 cni.go:93] Creating CNI manager for ""
	I1117 14:46:24.423094    8895 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:46:24.423103    8895 start_flags.go:282] config:
	{Name:test-preload-20211117144623-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20211117144623-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:46:24.470469    8895 out.go:176] * Starting control plane node test-preload-20211117144623-2140 in cluster test-preload-20211117144623-2140
	I1117 14:46:24.470552    8895 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:46:24.496750    8895 out.go:176] * Pulling base image ...
	I1117 14:46:24.496828    8895 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I1117 14:46:24.496906    8895 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:46:24.497083    8895 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/test-preload-20211117144623-2140/config.json ...
	I1117 14:46:24.497167    8895 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/test-preload-20211117144623-2140/config.json: {Name:mk4ccbc152ab3252bdcc9397f4e12b0247e67d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 14:46:24.497231    8895 cache.go:107] acquiring lock: {Name:mk21bca1056ae5ecf4c63221e96a2e33498df442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:24.497220    8895 cache.go:107] acquiring lock: {Name:mk25474a55302fe82d0cdb0c2c63bf43e7e10284 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:24.499343    8895 cache.go:107] acquiring lock: {Name:mk8d6d2c64a8cdcab754406b3d5e20ce5aa8cf9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:24.500167    8895 cache.go:107] acquiring lock: {Name:mke0c6e5942eb02f2871c546cd941f4ed70d18f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:24.499930    8895 cache.go:107] acquiring lock: {Name:mk9648a707202583c3c26d93f31c4e1696485e59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:24.500197    8895 cache.go:107] acquiring lock: {Name:mkdf326b9804d6d2539aa944961735bf611f459e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:24.500175    8895 cache.go:107] acquiring lock: {Name:mk5bd8731ec2f0170af1ecde9c6624c13e3407db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:24.500276    8895 cache.go:107] acquiring lock: {Name:mk930552e52d584dc0bc2b55bd9f15b63356d880 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:24.500281    8895 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I1117 14:46:24.500301    8895 cache.go:107] acquiring lock: {Name:mk6ed4774e490d74c36131020ba494e2a67495f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:24.500314    8895 cache.go:107] acquiring lock: {Name:mk172768f13415692c56edcfbab2a23942f6d0ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:24.500331    8895 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 3.123534ms
	I1117 14:46:24.500368    8895 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I1117 14:46:24.500472    8895 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1117 14:46:24.500495    8895 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I1117 14:46:24.500504    8895 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I1117 14:46:24.500500    8895 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.414727ms
	I1117 14:46:24.500516    8895 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 301.326µs
	I1117 14:46:24.500523    8895 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I1117 14:46:24.500559    8895 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I1117 14:46:24.500563    8895 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I1117 14:46:24.500544    8895 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1117 14:46:24.500497    8895 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I1117 14:46:24.500637    8895 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I1117 14:46:24.500706    8895 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I1117 14:46:24.500721    8895 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I1117 14:46:24.502353    8895 image.go:176] found k8s.gcr.io/etcd:3.4.3-0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:etcd} tag:3.4.3-0 original:k8s.gcr.io/etcd:3.4.3-0} opener:0xc0003b2070 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:46:24.502397    8895 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0
	I1117 14:46:24.502570    8895 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-controller-manager} tag:v1.17.0 original:k8s.gcr.io/kube-controller-manager:v1.17.0} opener:0xc0001aa150 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:46:24.502600    8895 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.0
	I1117 14:46:24.503133    8895 image.go:176] found k8s.gcr.io/kube-apiserver:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-apiserver} tag:v1.17.0 original:k8s.gcr.io/kube-apiserver:v1.17.0} opener:0xc000d34fc0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:46:24.503159    8895 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0
	I1117 14:46:24.503584    8895 image.go:176] found k8s.gcr.io/pause:3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:pause} tag:3.1 original:k8s.gcr.io/pause:3.1} opener:0xc0001aa380 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:46:24.503600    8895 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I1117 14:46:24.503822    8895 image.go:176] found k8s.gcr.io/coredns:1.6.5 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:coredns} tag:1.6.5 original:k8s.gcr.io/coredns:1.6.5} opener:0xc000d350a0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:46:24.503842    8895 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5
	I1117 14:46:24.504334    8895 image.go:176] found k8s.gcr.io/kube-scheduler:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-scheduler} tag:v1.17.0 original:k8s.gcr.io/kube-scheduler:v1.17.0} opener:0xc000d35180 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:46:24.504368    8895 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.0
	I1117 14:46:24.504676    8895 image.go:176] found k8s.gcr.io/kube-proxy:v1.17.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-proxy} tag:v1.17.0 original:k8s.gcr.io/kube-proxy:v1.17.0} opener:0xc0001aa4d0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:46:24.504700    8895 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.0
	I1117 14:46:24.505749    8895 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 6.871799ms
	I1117 14:46:24.505997    8895 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.0" took 8.792154ms
	I1117 14:46:24.506626    8895 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0" took 8.520077ms
	I1117 14:46:24.506825    8895 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 6.638841ms
	I1117 14:46:24.507058    8895 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" took 7.926727ms
	I1117 14:46:24.507339    8895 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.0" took 7.150142ms
	I1117 14:46:24.507883    8895 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.0" took 8.810169ms
	I1117 14:46:24.621101    8895 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:46:24.621118    8895 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:46:24.621129    8895 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:46:24.621155    8895 start.go:313] acquiring machines lock for test-preload-20211117144623-2140: {Name:mk47a177deb34e1b1e1bf474bfada8f9c8b54e57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:24.621288    8895 start.go:317] acquired machines lock for "test-preload-20211117144623-2140" in 121.99µs
	I1117 14:46:24.621314    8895 start.go:89] Provisioning new machine with config: &{Name:test-preload-20211117144623-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20211117144623-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ControlPlane:true Worker:true}
	I1117 14:46:24.621370    8895 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:46:24.648493    8895 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 14:46:24.648792    8895 start.go:160] libmachine.API.Create for "test-preload-20211117144623-2140" (driver="docker")
	I1117 14:46:24.648831    8895 client.go:168] LocalClient.Create starting
	I1117 14:46:24.648997    8895 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:46:24.649075    8895 main.go:130] libmachine: Decoding PEM data...
	I1117 14:46:24.649102    8895 main.go:130] libmachine: Parsing certificate...
	I1117 14:46:24.649198    8895 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:46:24.649251    8895 main.go:130] libmachine: Decoding PEM data...
	I1117 14:46:24.649267    8895 main.go:130] libmachine: Parsing certificate...
	I1117 14:46:24.650272    8895 cli_runner.go:115] Run: docker network inspect test-preload-20211117144623-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:46:24.762560    8895 cli_runner.go:162] docker network inspect test-preload-20211117144623-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:46:24.762667    8895 network_create.go:254] running [docker network inspect test-preload-20211117144623-2140] to gather additional debugging logs...
	I1117 14:46:24.762685    8895 cli_runner.go:115] Run: docker network inspect test-preload-20211117144623-2140
	W1117 14:46:24.874082    8895 cli_runner.go:162] docker network inspect test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:46:24.874103    8895 network_create.go:257] error running [docker network inspect test-preload-20211117144623-2140]: docker network inspect test-preload-20211117144623-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20211117144623-2140
	I1117 14:46:24.874123    8895 network_create.go:259] output of [docker network inspect test-preload-20211117144623-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20211117144623-2140
	
	** /stderr **
	I1117 14:46:24.874204    8895 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:46:24.984836    8895 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000338330] misses:0}
	I1117 14:46:24.984876    8895 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:46:24.984890    8895 network_create.go:106] attempt to create docker network test-preload-20211117144623-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1117 14:46:24.984976    8895 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20211117144623-2140
	I1117 14:46:28.899350    8895 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20211117144623-2140: (3.914905032s)
	I1117 14:46:28.899378    8895 network_create.go:90] docker network test-preload-20211117144623-2140 192.168.49.0/24 created
	I1117 14:46:28.899400    8895 kic.go:106] calculated static IP "192.168.49.2" for the "test-preload-20211117144623-2140" container
	I1117 14:46:28.899538    8895 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:46:29.010178    8895 cli_runner.go:115] Run: docker volume create test-preload-20211117144623-2140 --label name.minikube.sigs.k8s.io=test-preload-20211117144623-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:46:29.123586    8895 oci.go:102] Successfully created a docker volume test-preload-20211117144623-2140
	I1117 14:46:29.123703    8895 cli_runner.go:115] Run: docker run --rm --name test-preload-20211117144623-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20211117144623-2140 --entrypoint /usr/bin/test -v test-preload-20211117144623-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:46:29.655585    8895 oci.go:106] Successfully prepared a docker volume test-preload-20211117144623-2140
	E1117 14:46:29.655664    8895 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:46:29.655672    8895 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I1117 14:46:29.655702    8895 client.go:171] LocalClient.Create took 5.007594926s
	I1117 14:46:31.664435    8895 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:46:31.664595    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:46:31.780942    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:46:31.781043    8895 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:32.058797    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:46:32.172530    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:46:32.172611    8895 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:32.713055    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:46:32.830257    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:46:32.830339    8895 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:33.495645    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:46:33.607825    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	W1117 14:46:33.607896    8895 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	
	W1117 14:46:33.607909    8895 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:33.607920    8895 start.go:129] duration metric: createHost completed in 8.987703703s
	I1117 14:46:33.607926    8895 start.go:80] releasing machines lock for "test-preload-20211117144623-2140", held for 8.987788679s
	W1117 14:46:33.607941    8895 start.go:532] error starting host: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:46:33.608382    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:33.717814    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:46:33.717857    8895 delete.go:82] Unable to get host status for test-preload-20211117144623-2140, assuming it has already been deleted: state: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	W1117 14:46:33.717988    8895 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:46:33.718000    8895 start.go:547] Will try again in 5 seconds ...
	I1117 14:46:38.718320    8895 start.go:313] acquiring machines lock for test-preload-20211117144623-2140: {Name:mk47a177deb34e1b1e1bf474bfada8f9c8b54e57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:46:38.718484    8895 start.go:317] acquired machines lock for "test-preload-20211117144623-2140" in 129.162µs
	I1117 14:46:38.718522    8895 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:46:38.718536    8895 fix.go:55] fixHost starting: 
	I1117 14:46:38.719036    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:38.834743    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:46:38.834779    8895 fix.go:108] recreateIfNeeded on test-preload-20211117144623-2140: state= err=unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:38.834803    8895 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:46:38.861755    8895 out.go:176] * docker "test-preload-20211117144623-2140" container is missing, will recreate.
	I1117 14:46:38.861772    8895 delete.go:124] DEMOLISHING test-preload-20211117144623-2140 ...
	I1117 14:46:38.861904    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:38.972332    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:46:38.972373    8895 stop.go:75] unable to get state: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:38.972388    8895 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:38.972803    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:39.086821    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:46:39.086870    8895 delete.go:82] Unable to get host status for test-preload-20211117144623-2140, assuming it has already been deleted: state: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:39.086956    8895 cli_runner.go:115] Run: docker container inspect -f {{.Id}} test-preload-20211117144623-2140
	W1117 14:46:39.196683    8895 cli_runner.go:162] docker container inspect -f {{.Id}} test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:46:39.196722    8895 kic.go:360] could not find the container test-preload-20211117144623-2140 to remove it. will try anyways
	I1117 14:46:39.196801    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:39.307985    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:46:39.308023    8895 oci.go:83] error getting container status, will try to delete anyways: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:39.308107    8895 cli_runner.go:115] Run: docker exec --privileged -t test-preload-20211117144623-2140 /bin/bash -c "sudo init 0"
	W1117 14:46:39.416307    8895 cli_runner.go:162] docker exec --privileged -t test-preload-20211117144623-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:46:39.416330    8895 oci.go:658] error shutdown test-preload-20211117144623-2140: docker exec --privileged -t test-preload-20211117144623-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:40.419763    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:40.537049    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:46:40.537089    8895 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:40.537104    8895 oci.go:672] temporary error: container test-preload-20211117144623-2140 status is  but expect it to be exited
	I1117 14:46:40.537126    8895 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:41.003563    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:41.115201    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:46:41.115243    8895 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:41.115262    8895 oci.go:672] temporary error: container test-preload-20211117144623-2140 status is  but expect it to be exited
	I1117 14:46:41.115286    8895 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:42.007100    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:42.117738    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:46:42.117776    8895 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:42.117786    8895 oci.go:672] temporary error: container test-preload-20211117144623-2140 status is  but expect it to be exited
	I1117 14:46:42.117805    8895 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:42.754923    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:42.872628    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:46:42.872670    8895 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:42.872688    8895 oci.go:672] temporary error: container test-preload-20211117144623-2140 status is  but expect it to be exited
	I1117 14:46:42.872710    8895 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:43.980963    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:44.095838    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:46:44.095878    8895 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:44.095885    8895 oci.go:672] temporary error: container test-preload-20211117144623-2140 status is  but expect it to be exited
	I1117 14:46:44.095902    8895 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:45.617221    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:45.733409    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:46:45.733451    8895 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:45.733462    8895 oci.go:672] temporary error: container test-preload-20211117144623-2140 status is  but expect it to be exited
	I1117 14:46:45.733483    8895 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:48.784735    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:48.898228    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:46:48.898269    8895 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:48.898280    8895 oci.go:672] temporary error: container test-preload-20211117144623-2140 status is  but expect it to be exited
	I1117 14:46:48.898305    8895 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:54.688066    8895 cli_runner.go:115] Run: docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}
	W1117 14:46:54.800324    8895 cli_runner.go:162] docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:46:54.800373    8895 oci.go:670] temporary error verifying shutdown: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:46:54.800395    8895 oci.go:672] temporary error: container test-preload-20211117144623-2140 status is  but expect it to be exited
	I1117 14:46:54.800424    8895 oci.go:87] couldn't shut down test-preload-20211117144623-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	 
	I1117 14:46:54.800511    8895 cli_runner.go:115] Run: docker rm -f -v test-preload-20211117144623-2140
	I1117 14:46:54.910542    8895 cli_runner.go:115] Run: docker container inspect -f {{.Id}} test-preload-20211117144623-2140
	W1117 14:46:55.021088    8895 cli_runner.go:162] docker container inspect -f {{.Id}} test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:46:55.021208    8895 cli_runner.go:115] Run: docker network inspect test-preload-20211117144623-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:46:55.136509    8895 cli_runner.go:115] Run: docker network rm test-preload-20211117144623-2140
	I1117 14:46:57.960927    8895 cli_runner.go:168] Completed: docker network rm test-preload-20211117144623-2140: (2.824405665s)
	W1117 14:46:57.961189    8895 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:46:57.961196    8895 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:46:58.966482    8895 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:46:58.993674    8895 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 14:46:58.993893    8895 start.go:160] libmachine.API.Create for "test-preload-20211117144623-2140" (driver="docker")
	I1117 14:46:58.993930    8895 client.go:168] LocalClient.Create starting
	I1117 14:46:58.994113    8895 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:46:58.994202    8895 main.go:130] libmachine: Decoding PEM data...
	I1117 14:46:58.994221    8895 main.go:130] libmachine: Parsing certificate...
	I1117 14:46:58.994322    8895 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:46:59.015396    8895 main.go:130] libmachine: Decoding PEM data...
	I1117 14:46:59.015431    8895 main.go:130] libmachine: Parsing certificate...
	I1117 14:46:59.016496    8895 cli_runner.go:115] Run: docker network inspect test-preload-20211117144623-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:46:59.131793    8895 cli_runner.go:162] docker network inspect test-preload-20211117144623-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:46:59.131893    8895 network_create.go:254] running [docker network inspect test-preload-20211117144623-2140] to gather additional debugging logs...
	I1117 14:46:59.131911    8895 cli_runner.go:115] Run: docker network inspect test-preload-20211117144623-2140
	W1117 14:46:59.244698    8895 cli_runner.go:162] docker network inspect test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:46:59.244722    8895 network_create.go:257] error running [docker network inspect test-preload-20211117144623-2140]: docker network inspect test-preload-20211117144623-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20211117144623-2140
	I1117 14:46:59.244738    8895 network_create.go:259] output of [docker network inspect test-preload-20211117144623-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20211117144623-2140
	
	** /stderr **
	I1117 14:46:59.244823    8895 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1117 14:46:59.356549    8895 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000338330] amended:false}} dirty:map[] misses:0}
	I1117 14:46:59.356580    8895 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:46:59.356774    8895 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000338330] amended:true}} dirty:map[192.168.49.0:0xc000338330 192.168.58.0:0xc00072a148] misses:0}
	I1117 14:46:59.356787    8895 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:46:59.356793    8895 network_create.go:106] attempt to create docker network test-preload-20211117144623-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1117 14:46:59.356879    8895 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20211117144623-2140
	I1117 14:47:03.183115    8895 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true test-preload-20211117144623-2140: (3.826197913s)
	I1117 14:47:03.183144    8895 network_create.go:90] docker network test-preload-20211117144623-2140 192.168.58.0/24 created
	I1117 14:47:03.183162    8895 kic.go:106] calculated static IP "192.168.58.2" for the "test-preload-20211117144623-2140" container
	I1117 14:47:03.183323    8895 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1117 14:47:03.294367    8895 cli_runner.go:115] Run: docker volume create test-preload-20211117144623-2140 --label name.minikube.sigs.k8s.io=test-preload-20211117144623-2140 --label created_by.minikube.sigs.k8s.io=true
	I1117 14:47:03.403586    8895 oci.go:102] Successfully created a docker volume test-preload-20211117144623-2140
	I1117 14:47:03.403716    8895 cli_runner.go:115] Run: docker run --rm --name test-preload-20211117144623-2140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20211117144623-2140 --entrypoint /usr/bin/test -v test-preload-20211117144623-2140:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
	I1117 14:47:03.845817    8895 oci.go:106] Successfully prepared a docker volume test-preload-20211117144623-2140
	E1117 14:47:03.845875    8895 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	I1117 14:47:03.845875    8895 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I1117 14:47:03.845886    8895 client.go:171] LocalClient.Create took 4.851970199s
	I1117 14:47:05.856263    8895 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:47:05.856407    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:47:05.972014    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:47:05.972096    8895 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:47:06.151873    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:47:06.267568    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:47:06.267657    8895 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:47:06.608253    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:47:06.724380    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:47:06.724467    8895 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:47:07.195007    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:47:07.308082    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	W1117 14:47:07.308169    8895 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	
	W1117 14:47:07.308200    8895 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:47:07.308214    8895 start.go:129] duration metric: createHost completed in 8.341731412s
	I1117 14:47:07.308271    8895 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:47:07.308353    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:47:07.418790    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:47:07.418863    8895 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:47:07.620116    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:47:07.732218    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:47:07.732305    8895 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:47:08.037031    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:47:08.153115    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	I1117 14:47:08.153212    8895 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:47:08.826853    8895 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140
	W1117 14:47:08.944079    8895 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140 returned with exit code 1
	W1117 14:47:08.944154    8895 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	
	W1117 14:47:08.944175    8895 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "test-preload-20211117144623-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20211117144623-2140: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
	I1117 14:47:08.944187    8895 fix.go:57] fixHost completed within 30.226249216s
	I1117 14:47:08.944195    8895 start.go:80] releasing machines lock for "test-preload-20211117144623-2140", held for 30.226297814s
	W1117 14:47:08.944337    8895 out.go:241] * Failed to start docker container. Running "minikube delete -p test-preload-20211117144623-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	I1117 14:47:08.992701    8895 out.go:176] 
	W1117 14:47:08.992913    8895 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	W1117 14:47:08.992930    8895 out.go:241] * 
	W1117 14:47:08.994058    8895 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:47:09.072801    8895 out.go:176] 
** /stderr **
preload_test.go:51: out/minikube-darwin-amd64 start -p test-preload-20211117144623-2140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 80
panic.go:642: *** TestPreload FAILED at 2021-11-17 14:47:09.103888 -0800 PST m=+1427.179732541
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20211117144623-2140
helpers_test.go:235: (dbg) docker inspect test-preload-20211117144623-2140:
-- stdout --
	[
	    {
	        "Name": "test-preload-20211117144623-2140",
	        "Id": "b9f25029a084add5d6378cb37a065e9c24109b54284affb449340894a9cb0afd",
	        "Created": "2021-11-17T22:46:59.482287848Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20211117144623-2140 -n test-preload-20211117144623-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20211117144623-2140 -n test-preload-20211117144623-2140: exit status 7 (154.902135ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E1117 14:47:09.374005    9112 status.go:247] status error: host: state: unknown state "test-preload-20211117144623-2140": docker container inspect test-preload-20211117144623-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: test-preload-20211117144623-2140
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-20211117144623-2140" host is not running, skipping log retrieval (state="Nonexistent")
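The repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls in the log use a Go template to pull the published SSH port out of the inspect payload. The same template expression can be exercised against a minimal stand-in struct (the `container`/`sshHostPort` names are hypothetical, and the struct only models the fields the template touches):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// portBinding and container model just enough of the docker inspect
// payload for the template to evaluate.
type portBinding struct{ HostPort string }

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

// sshHostPort evaluates the same template expression minikube passes to
// `docker container inspect -f` to find the host port mapped to 22/tcp.
func sshHostPort(c container) (string, error) {
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, c); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	var c container
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostPort: "55000"}},
	}
	port, err := sshHostPort(c)
	if err != nil {
		panic(err)
	}
	fmt.Println(port)
}
```

In the failing run above, the container never exists, so `docker container inspect` exits non-zero before the template is ever evaluated, which is why every SSH-port lookup fails.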
helpers_test.go:175: Cleaning up "test-preload-20211117144623-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20211117144623-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20211117144623-2140: (3.861523259s)
--- FAIL: TestPreload (49.52s)
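The network setup in the TestPreload log (network.go skipping the reserved 192.168.49.0/24 and settling on 192.168.58.0/24) amounts to scanning candidate private /24 subnets and taking the first unreserved one. A hedged Go sketch, assuming the +9 third-octet step implied by the 49→58 jump and a hypothetical `pickFreeSubnet` helper:

```go
package main

import (
	"fmt"
	"net"
)

// pickFreeSubnet walks candidate 192.168.x.0/24 subnets (49, 58, 67, ...)
// and returns the first one absent from the reserved set. The +9 step is
// inferred from the 192.168.49.0 -> 192.168.58.0 jump in the log above.
func pickFreeSubnet(reserved map[string]bool) (*net.IPNet, error) {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if reserved[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free private subnet found")
}

func main() {
	// 192.168.49.0/24 holds an unexpired reservation, as in the log.
	reserved := map[string]bool{"192.168.49.0/24": true}
	subnet, err := pickFreeSubnet(reserved)
	if err != nil {
		panic(err)
	}
	fmt.Println(subnet)
}
```

The real implementation also time-bounds each reservation ("reserving subnet 192.168.58.0 for 1m0s"), which this sketch omits.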
TestScheduledStopUnix (48.69s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20211117144713-2140 --memory=2048 --driver=docker 
scheduled_stop_test.go:129: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-20211117144713-2140 --memory=2048 --driver=docker : exit status 80 (44.062797437s)
-- stdout --
	* [scheduled-stop-20211117144713-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node scheduled-stop-20211117144713-2140 in cluster scheduled-stop-20211117144713-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20211117144713-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	E1117 14:47:19.093084    9156 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:47:51.544044    9156 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20211117144713-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:131: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-20211117144713-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node scheduled-stop-20211117144713-2140 in cluster scheduled-stop-20211117144713-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "scheduled-stop-20211117144713-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1117 14:47:19.093084    9156 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:47:51.544044    9156 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p scheduled-stop-20211117144713-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:642: *** TestScheduledStopUnix FAILED at 2021-11-17 14:47:57.30066 -0800 PST m=+1475.376013450
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-20211117144713-2140
helpers_test.go:235: (dbg) docker inspect scheduled-stop-20211117144713-2140:

-- stdout --
	[
	    {
	        "Name": "scheduled-stop-20211117144713-2140",
	        "Id": "d06553b93032d3ceb428cb38b6462bb3582c4887d2d4440baa961ef210c5f2f4",
	        "Created": "2021-11-17T22:47:46.959072732Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20211117144713-2140 -n scheduled-stop-20211117144713-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20211117144713-2140 -n scheduled-stop-20211117144713-2140: exit status 7 (150.707244ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:47:57.803517    9387 status.go:247] status error: host: state: unknown state "scheduled-stop-20211117144713-2140": docker container inspect scheduled-stop-20211117144713-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: scheduled-stop-20211117144713-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-20211117144713-2140" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-20211117144713-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20211117144713-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20211117144713-2140: (4.127275591s)
--- FAIL: TestScheduledStopUnix (48.69s)

TestSkaffold (52.29s)

=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe4041221676 version
skaffold_test.go:61: skaffold version: v1.35.0
skaffold_test.go:64: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20211117144801-2140 --memory=2600 --driver=docker 
skaffold_test.go:64: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-20211117144801-2140 --memory=2600 --driver=docker : exit status 80 (45.849507975s)

-- stdout --
	* [skaffold-20211117144801-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node skaffold-20211117144801-2140 in cluster skaffold-20211117144801-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	* docker "skaffold-20211117144801-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 14:48:09.665329    9432 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:48:43.975340    9432 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p skaffold-20211117144801-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:66: starting minikube: exit status 80

-- stdout --
	* [skaffold-20211117144801-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node skaffold-20211117144801-2140 in cluster skaffold-20211117144801-2140
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	* docker "skaffold-20211117144801-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2600MB) ...
	
	

-- /stdout --
** stderr ** 
	E1117 14:48:09.665329    9432 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	! StartHost failed, but will try again: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	E1117 14:48:43.975340    9432 oci.go:197] error getting kernel modules path: Unable to locate kernel modules
	* Failed to start docker container. Running "minikube delete -p skaffold-20211117144801-2140" may fix it: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: kernel modules: Unable to locate kernel modules
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:642: *** TestSkaffold FAILED at 2021-11-17 14:48:49.650913 -0800 PST m=+1527.725569122
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-20211117144801-2140
helpers_test.go:235: (dbg) docker inspect skaffold-20211117144801-2140:

-- stdout --
	[
	    {
	        "Name": "skaffold-20211117144801-2140",
	        "Id": "142475097fb7b8c0f4e9030445db8d389f2c5830706e43a9090948abc9a4ad44",
	        "Created": "2021-11-17T22:48:39.518695422Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-20211117144801-2140 -n skaffold-20211117144801-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-20211117144801-2140 -n skaffold-20211117144801-2140: exit status 7 (173.107451ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:48:49.935900    9664 status.go:247] status error: host: state: unknown state "skaffold-20211117144801-2140": docker container inspect skaffold-20211117144801-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: skaffold-20211117144801-2140

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-20211117144801-2140" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-20211117144801-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20211117144801-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20211117144801-2140: (4.280849944s)
--- FAIL: TestSkaffold (52.29s)

TestInsufficientStorage (13.11s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20211117144854-2140 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20211117144854-2140 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (7.791349577s)

-- stdout --
	{"specversion":"1.0","id":"c15e4d86-98ed-4523-8f71-a6090053c556","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20211117144854-2140] minikube v1.24.0 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"57e9acdb-2e8a-4c37-aae0-942ee010d4d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"a22d2f5d-5a29-4183-9913-c32798af735f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig"}}
	{"specversion":"1.0","id":"3255dde0-99d8-48fa-be57-51dfa977a750","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7d9eb171-00c6-4fe1-83bc-10d5a6d68b0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube"}}
	{"specversion":"1.0","id":"548978df-b42f-4eb0-8c73-e2df523dc88d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0ef99527-0ab3-4f4a-a0c9-d2d243a2a1c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8a6f9ea-3be1-4110-8634-320f8bae5fb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20211117144854-2140 in cluster insufficient-storage-20211117144854-2140","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5150fd77-5cad-4773-a85d-75968d5f5cef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"99580c57-0e5c-4f1f-9ad8-a8fc0fde4c25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"49c22350-4338-4bff-bb3b-15b787f926c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
** stderr ** 
	E1117 14:48:59.979341    9706 oci.go:197] error getting kernel modules path: Unable to locate kernel modules

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20211117144854-2140 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20211117144854-2140 --output=json --layout=cluster: exit status 7 (191.220625ms)

-- stdout --
	{"Name":"insufficient-storage-20211117144854-2140","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":520,"StatusName":"Unknown"}},"Nodes":[{"Name":"insufficient-storage-20211117144854-2140","StatusCode":520,"StatusName":"Unknown","Components":{"apiserver":{"Name":"apiserver","StatusCode":520,"StatusName":"Unknown"},"kubelet":{"Name":"kubelet","StatusCode":520,"StatusName":"Unknown"}}}]}

-- /stdout --
** stderr ** 
	E1117 14:49:02.200041    9771 status.go:258] status error: host: state: unknown state "insufficient-storage-20211117144854-2140": docker container inspect insufficient-storage-20211117144854-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: insufficient-storage-20211117144854-2140
	E1117 14:49:02.200053    9771 status.go:261] The "insufficient-storage-20211117144854-2140" host does not exist!

** /stderr **
status_test.go:99: incorrect node status code: 507
helpers_test.go:175: Cleaning up "insufficient-storage-20211117144854-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20211117144854-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20211117144854-2140: (5.128535736s)
--- FAIL: TestInsufficientStorage (13.11s)

TestKubernetesUpgrade (116.56s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211117145621-2140 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211117145621-2140 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker : exit status 80 (1m40.65951068s)

-- stdout --
	* [kubernetes-upgrade-20211117145621-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node kubernetes-upgrade-20211117145621-2140 in cluster kubernetes-upgrade-20211117145621-2140
	* Pulling base image ...
	* Downloading Kubernetes v1.14.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-20211117145621-2140" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I1117 14:56:21.200104   12552 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:56:21.200235   12552 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:56:21.200241   12552 out.go:310] Setting ErrFile to fd 2...
	I1117 14:56:21.200244   12552 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:56:21.200325   12552 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:56:21.200630   12552 out.go:304] Setting JSON to false
	I1117 14:56:21.227466   12552 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3356,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:56:21.227557   12552 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:56:21.253694   12552 out.go:176] * [kubernetes-upgrade-20211117145621-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:56:21.253777   12552 notify.go:174] Checking for updates...
	I1117 14:56:21.300395   12552 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:56:21.326585   12552 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:56:21.353552   12552 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:56:21.379392   12552 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:56:21.379836   12552 config.go:176] Loaded profile config "missing-upgrade-20211117145557-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 14:56:21.379932   12552 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:56:21.379966   12552 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:56:21.497418   12552 docker.go:132] docker version: linux-20.10.6
	I1117 14:56:21.497586   12552 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:56:21.720235   12552 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:61 SystemTime:2021-11-17 22:56:21.638238639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:56:21.767754   12552 out.go:176] * Using the docker driver based on user configuration
	I1117 14:56:21.767781   12552 start.go:280] selected driver: docker
	I1117 14:56:21.767792   12552 start.go:775] validating driver "docker" against <nil>
	I1117 14:56:21.767827   12552 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:56:21.770481   12552 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:56:21.994460   12552 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:61 SystemTime:2021-11-17 22:56:21.914905287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:56:21.994551   12552 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 14:56:21.994685   12552 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 14:56:21.994709   12552 cni.go:93] Creating CNI manager for ""
	I1117 14:56:21.994716   12552 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:56:21.994729   12552 start_flags.go:282] config:
	{Name:kubernetes-upgrade-20211117145621-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20211117145621-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:56:22.036867   12552 out.go:176] * Starting control plane node kubernetes-upgrade-20211117145621-2140 in cluster kubernetes-upgrade-20211117145621-2140
	I1117 14:56:22.036959   12552 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:56:22.062826   12552 out.go:176] * Pulling base image ...
	I1117 14:56:22.062882   12552 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 14:56:22.062945   12552 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:56:22.132688   12552 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4
	I1117 14:56:22.132713   12552 cache.go:57] Caching tarball of preloaded images
	I1117 14:56:22.132926   12552 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 14:56:22.158837   12552 out.go:176] * Downloading Kubernetes v1.14.0 preload ...
	I1117 14:56:22.158884   12552 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 14:56:22.225961   12552 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 14:56:22.226074   12552 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
	I1117 14:56:22.226090   12552 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory, skipping pull
	I1117 14:56:22.226094   12552 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in cache, skipping pull
	I1117 14:56:22.226103   12552 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c as a tarball
	I1117 14:56:22.226107   12552 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c from local cache
	I1117 14:56:22.249471   12552 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4?checksum=md5:ec855295d74f2fe00733f44cbe6bc00d -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4
	I1117 14:56:23.103510   12552 cache.go:170] failed to load gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c, will try remote image if available: error loading image: Error response from daemon: Bad response from Docker engine
	I1117 14:56:23.103529   12552 cache.go:172] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local daemon
	I1117 14:56:23.103881   12552 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:56:23.221736   12552 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local daemon
	    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s (progress output truncated)
	I1117 14:56:25.706794   12552 cache.go:180] failed to download gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c, will try fallback image if available: writing daemon image: error loading image: Error response from daemon: Bad response from Docker engine
	I1117 14:56:25.706810   12552 image.go:75] Checking for docker.io/kicbase/stable:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:56:25.822252   12552 cache.go:146] Downloading docker.io/kicbase/stable:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 14:56:25.822458   12552 image.go:59] Checking for docker.io/kicbase/stable:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
	I1117 14:56:25.822495   12552 image.go:119] Writing docker.io/kicbase/stable:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	    > index.docker.io/kicbase/sta...: 27.27 MiB / 355.78 MiB  7.67% 31.37 MiB p/s (progress output truncated)
	I1117 14:56:27.664787   12552 preload.go:248] saving checksum for preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 14:56:27.664935   12552 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 14:56:28.391128   12552 cache.go:60] Finished verifying existence of preloaded tar for v1.14.0 on docker
	I1117 14:56:28.391204   12552 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/kubernetes-upgrade-20211117145621-2140/config.json ...
	I1117 14:56:28.391231   12552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/kubernetes-upgrade-20211117145621-2140/config.json: {Name:mk40037ac0f27499006744b6fa2af4f2e7ac2d72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	    > index.docker.io/kicbase/sta...: 355.78 MiB / 355.78 MiB  100.00% (progress output truncated)
	I1117 14:56:51.450931   12552 cache.go:149] successfully saved docker.io/kicbase/stable:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c as a tarball
	I1117 14:56:51.450946   12552 cache.go:160] Loading docker.io/kicbase/stable:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c from local cache
	I1117 14:56:52.281845   12552 cache.go:170] failed to load docker.io/kicbase/stable:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c, will try remote image if available: error loading image: Error response from daemon: Bad response from Docker engine
	I1117 14:56:52.281857   12552 cache.go:172] Downloading docker.io/kicbase/stable:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local daemon
	I1117 14:56:52.282002   12552 image.go:75] Checking for docker.io/kicbase/stable:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:56:52.403058   12552 image.go:243] Writing docker.io/kicbase/stable:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local daemon
	    > index.docker.io/kicbase/sta...: 0 B [________________________] ?% ? p/s (progress output truncated)
	I1117 14:56:53.195955   12552 cache.go:180] failed to download docker.io/kicbase/stable:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c, will try fallback image if available: writing daemon image: error loading image: Error response from daemon: Bad response from Docker engine
	I1117 14:56:53.195970   12552 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28 in local docker daemon
	I1117 14:56:53.310011   12552 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28 to local cache
	I1117 14:56:53.310224   12552 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28 in local cache directory
	I1117 14:56:53.310265   12552 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.28 to local cache
	    > gcr.io/k8s-minikube/kicbase...: 355.78 MiB / 355.78 MiB  100.00% (progress output truncated)
	I1117 14:57:11.773923   12552 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.28 as a tarball
	I1117 14:57:11.773954   12552 cache.go:160] Loading gcr.io/k8s-minikube/kicbase:v0.0.28 from local cache
	I1117 14:57:12.602562   12552 cache.go:170] failed to load gcr.io/k8s-minikube/kicbase:v0.0.28, will try remote image if available: error loading image: Error response from daemon: Bad response from Docker engine
	I1117 14:57:12.602581   12552 cache.go:172] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28 to local daemon
	I1117 14:57:12.602745   12552 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28 in local docker daemon
	I1117 14:57:12.719130   12552 image.go:243] Writing gcr.io/k8s-minikube/kicbase:v0.0.28 to local daemon
	    > gcr.io/k8s-minikube/kicbase...: 0 B [________________________] ?% ? p/s (progress output truncated)
	I1117 14:57:14.577506   12552 cache.go:180] failed to download gcr.io/k8s-minikube/kicbase:v0.0.28, will try fallback image if available: writing daemon image: error loading image: Error response from daemon: Bad response from Docker engine
	I1117 14:57:14.577525   12552 image.go:75] Checking for docker.io/kicbase/stable:v0.0.28 in local docker daemon
	I1117 14:57:14.697983   12552 cache.go:146] Downloading docker.io/kicbase/stable:v0.0.28 to local cache
	I1117 14:57:14.698174   12552 image.go:59] Checking for docker.io/kicbase/stable:v0.0.28 in local cache directory
	I1117 14:57:14.698206   12552 image.go:119] Writing docker.io/kicbase/stable:v0.0.28 to local cache
	    > index.docker.io/kicbase/sta...: 355.78 MiB / 355.78 MiB  100.00% (progress output truncated)
	I1117 14:57:26.307816   12552 cache.go:149] successfully saved docker.io/kicbase/stable:v0.0.28 as a tarball
	I1117 14:57:26.307828   12552 cache.go:160] Loading docker.io/kicbase/stable:v0.0.28 from local cache
	I1117 14:57:27.133912   12552 cache.go:170] failed to load docker.io/kicbase/stable:v0.0.28, will try remote image if available: error loading image: Error response from daemon: Bad response from Docker engine
	I1117 14:57:27.133921   12552 cache.go:172] Downloading docker.io/kicbase/stable:v0.0.28 to local daemon
	I1117 14:57:27.134090   12552 image.go:75] Checking for docker.io/kicbase/stable:v0.0.28 in local docker daemon
	I1117 14:57:27.254511   12552 image.go:243] Writing docker.io/kicbase/stable:v0.0.28 to local daemon
	    > index.docker.io/kicbase/sta...: 0 B [________________________] ?% ? p/s (progress output truncated)
	I1117 14:57:28.080642   12552 cache.go:180] failed to download docker.io/kicbase/stable:v0.0.28, will try fallback image if available: writing daemon image: error loading image: Error response from daemon: Bad response from Docker engine
	W1117 14:57:28.080674   12552 out.go:241] ! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.28, but successfully downloaded docker.io/kicbase/stable:v0.0.28 as a fallback image
	! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.28, but successfully downloaded docker.io/kicbase/stable:v0.0.28 as a fallback image
	E1117 14:57:28.080696   12552 cache.go:201] Error downloading kic artifacts:  failed to download kic base image or any fallback image
	I1117 14:57:28.080703   12552 cache.go:206] Successfully downloaded all kic artifacts
	I1117 14:57:28.080742   12552 start.go:313] acquiring machines lock for kubernetes-upgrade-20211117145621-2140: {Name:mk4b31c179a94f554967451979ba16c7780f7f57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:57:28.080977   12552 start.go:317] acquired machines lock for "kubernetes-upgrade-20211117145621-2140" in 219.49µs
	I1117 14:57:28.081017   12552 start.go:89] Provisioning new machine with config: &{Name:kubernetes-upgrade-20211117145621-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:docker.io/kicbase/stable:v0.0.28 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20211117145621-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I1117 14:57:28.081077   12552 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:57:28.129138   12552 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 14:57:28.129504   12552 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20211117145621-2140" (driver="docker")
	I1117 14:57:28.129563   12552 client.go:168] LocalClient.Create starting
	I1117 14:57:28.129747   12552 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:57:28.150256   12552 main.go:130] libmachine: Decoding PEM data...
	I1117 14:57:28.150279   12552 main.go:130] libmachine: Parsing certificate...
	I1117 14:57:28.150353   12552 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:57:28.150398   12552 main.go:130] libmachine: Decoding PEM data...
	I1117 14:57:28.150409   12552 main.go:130] libmachine: Parsing certificate...
	I1117 14:57:28.151100   12552 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117145621-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:57:28.267463   12552 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117145621-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:57:28.267571   12552 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117145621-2140] to gather additional debugging logs...
	I1117 14:57:28.267590   12552 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117145621-2140
	W1117 14:57:28.383370   12552 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:28.383404   12552 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117145621-2140]: docker network inspect kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:28.383423   12552 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117145621-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	I1117 14:57:28.383525   12552 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:57:28.496388   12552 cli_runner.go:162] docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:57:28.496487   12552 network_create.go:254] running [docker network inspect bridge] to gather additional debugging logs...
	I1117 14:57:28.496509   12552 cli_runner.go:115] Run: docker network inspect bridge
	W1117 14:57:28.612724   12552 cli_runner.go:162] docker network inspect bridge returned with exit code 1
	I1117 14:57:28.612749   12552 network_create.go:257] error running [docker network inspect bridge]: docker network inspect bridge: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:28.612774   12552 network_create.go:259] output of [docker network inspect bridge]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W1117 14:57:28.612781   12552 network_create.go:75] failed to get mtu information from the docker's default network "bridge": docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:28.613031   12552 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00013e460] misses:0}
	I1117 14:57:28.613054   12552 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:57:28.613068   12552 network_create.go:106] attempt to create docker network kubernetes-upgrade-20211117145621-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0 ...
	I1117 14:57:28.613158   12552 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117145621-2140
	W1117 14:57:28.731206   12552 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	E1117 14:57:28.731256   12552 network_create.go:95] error while trying to create docker network kubernetes-upgrade-20211117145621-2140 192.168.49.0/24: create docker network kubernetes-upgrade-20211117145621-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W1117 14:57:28.731367   12552 out.go:241] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20211117145621-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20211117145621-2140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I1117 14:57:28.731487   12552 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	W1117 14:57:28.845885   12552 cli_runner.go:162] docker ps -a --format {{.Names}} returned with exit code 1
	W1117 14:57:28.846017   12552 kic.go:149] failed to check if container already exists: docker ps -a --format {{.Names}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:28.846123   12552 cli_runner.go:115] Run: docker volume create kubernetes-upgrade-20211117145621-2140 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117145621-2140 --label created_by.minikube.sigs.k8s.io=true
	W1117 14:57:28.965098   12552 cli_runner.go:162] docker volume create kubernetes-upgrade-20211117145621-2140 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117145621-2140 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I1117 14:57:28.965140   12552 client.go:171] LocalClient.Create took 835.54985ms
	I1117 14:57:30.975642   12552 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:57:30.975779   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:57:31.093013   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:31.093094   12552 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:31.370427   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:57:31.487074   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:31.487155   12552 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:32.034174   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:57:32.154771   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:32.154845   12552 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:32.812169   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:57:32.928893   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	W1117 14:57:32.928970   12552 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 14:57:32.928998   12552 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:32.929010   12552 start.go:129] duration metric: createHost completed in 4.84786332s
	I1117 14:57:32.929017   12552 start.go:80] releasing machines lock for "kubernetes-upgrade-20211117145621-2140", held for 4.847963814s
	W1117 14:57:32.929060   12552 start.go:532] error starting host: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117145621-2140 container: docker volume create kubernetes-upgrade-20211117145621-2140 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117145621-2140 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:32.929612   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:33.044169   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:57:33.044216   12552 delete.go:82] Unable to get host status for kubernetes-upgrade-20211117145621-2140, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W1117 14:57:33.044389   12552 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117145621-2140 container: docker volume create kubernetes-upgrade-20211117145621-2140 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117145621-2140 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117145621-2140 container: docker volume create kubernetes-upgrade-20211117145621-2140 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117145621-2140 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I1117 14:57:33.044402   12552 start.go:547] Will try again in 5 seconds ...
	I1117 14:57:38.051980   12552 start.go:313] acquiring machines lock for kubernetes-upgrade-20211117145621-2140: {Name:mk4b31c179a94f554967451979ba16c7780f7f57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:57:38.052142   12552 start.go:317] acquired machines lock for "kubernetes-upgrade-20211117145621-2140" in 127.471µs
	I1117 14:57:38.052177   12552 start.go:93] Skipping create...Using existing machine configuration
	I1117 14:57:38.052190   12552 fix.go:55] fixHost starting: 
	I1117 14:57:38.052616   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:38.169997   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:57:38.170037   12552 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20211117145621-2140: state= err=unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:38.170054   12552 fix.go:113] machineExists: false. err=machine does not exist
	I1117 14:57:38.196910   12552 out.go:176] * docker "kubernetes-upgrade-20211117145621-2140" container is missing, will recreate.
	I1117 14:57:38.196941   12552 delete.go:124] DEMOLISHING kubernetes-upgrade-20211117145621-2140 ...
	I1117 14:57:38.197209   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:38.315075   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:57:38.315118   12552 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:38.315140   12552 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:38.315536   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:38.433043   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:57:38.433087   12552 delete.go:82] Unable to get host status for kubernetes-upgrade-20211117145621-2140, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:38.433184   12552 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20211117145621-2140
	W1117 14:57:38.548690   12552 cli_runner.go:162] docker container inspect -f {{.Id}} kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:38.548725   12552 kic.go:360] could not find the container kubernetes-upgrade-20211117145621-2140 to remove it. will try anyways
	I1117 14:57:38.548818   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:38.667461   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	W1117 14:57:38.667505   12552 oci.go:83] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:38.667602   12552 cli_runner.go:115] Run: docker exec --privileged -t kubernetes-upgrade-20211117145621-2140 /bin/bash -c "sudo init 0"
	W1117 14:57:38.782886   12552 cli_runner.go:162] docker exec --privileged -t kubernetes-upgrade-20211117145621-2140 /bin/bash -c "sudo init 0" returned with exit code 1
	I1117 14:57:38.782935   12552 oci.go:658] error shutdown kubernetes-upgrade-20211117145621-2140: docker exec --privileged -t kubernetes-upgrade-20211117145621-2140 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:39.783518   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:39.900850   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:57:39.900892   12552 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:39.900905   12552 oci.go:672] temporary error: container kubernetes-upgrade-20211117145621-2140 status is  but expect it to be exited
	I1117 14:57:39.900926   12552 retry.go:31] will retry after 462.318748ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:40.369256   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:40.486479   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:57:40.486520   12552 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:40.486538   12552 oci.go:672] temporary error: container kubernetes-upgrade-20211117145621-2140 status is  but expect it to be exited
	I1117 14:57:40.486563   12552 retry.go:31] will retry after 890.117305ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:41.378652   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:41.498106   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:57:41.498153   12552 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:41.498162   12552 oci.go:672] temporary error: container kubernetes-upgrade-20211117145621-2140 status is  but expect it to be exited
	I1117 14:57:41.498186   12552 retry.go:31] will retry after 636.341646ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:42.141638   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:42.260539   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:57:42.260581   12552 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:42.260590   12552 oci.go:672] temporary error: container kubernetes-upgrade-20211117145621-2140 status is  but expect it to be exited
	I1117 14:57:42.260611   12552 retry.go:31] will retry after 1.107876242s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:43.368740   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:43.489343   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:57:43.489382   12552 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:43.489401   12552 oci.go:672] temporary error: container kubernetes-upgrade-20211117145621-2140 status is  but expect it to be exited
	I1117 14:57:43.489422   12552 retry.go:31] will retry after 1.511079094s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:45.010852   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:45.129371   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:57:45.129444   12552 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:45.129456   12552 oci.go:672] temporary error: container kubernetes-upgrade-20211117145621-2140 status is  but expect it to be exited
	I1117 14:57:45.129485   12552 retry.go:31] will retry after 3.04096222s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:48.171949   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:48.294821   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:57:48.294863   12552 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:48.294873   12552 oci.go:672] temporary error: container kubernetes-upgrade-20211117145621-2140 status is  but expect it to be exited
	I1117 14:57:48.294895   12552 retry.go:31] will retry after 5.781953173s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:54.087333   12552 cli_runner.go:115] Run: docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}
	W1117 14:57:54.204703   12552 cli_runner.go:162] docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}} returned with exit code 1
	I1117 14:57:54.204762   12552 oci.go:670] temporary error verifying shutdown: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:54.204772   12552 oci.go:672] temporary error: container kubernetes-upgrade-20211117145621-2140 status is  but expect it to be exited
	I1117 14:57:54.204817   12552 oci.go:87] couldn't shut down kubernetes-upgrade-20211117145621-2140 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	 
	I1117 14:57:54.204907   12552 cli_runner.go:115] Run: docker rm -f -v kubernetes-upgrade-20211117145621-2140
	W1117 14:57:54.320414   12552 cli_runner.go:162] docker rm -f -v kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:54.320542   12552 cli_runner.go:115] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20211117145621-2140
	W1117 14:57:54.438416   12552 cli_runner.go:162] docker container inspect -f {{.Id}} kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:54.438546   12552 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117145621-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:57:54.554829   12552 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117145621-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:57:54.555042   12552 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117145621-2140] to gather additional debugging logs...
	I1117 14:57:54.555060   12552 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117145621-2140
	W1117 14:57:54.667975   12552 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:54.668165   12552 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117145621-2140]: docker network inspect kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:54.668182   12552 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117145621-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W1117 14:57:54.668193   12552 network_create.go:284] Error inspecting docker network kubernetes-upgrade-20211117145621-2140: docker network inspect kubernetes-upgrade-20211117145621-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W1117 14:57:54.668441   12552 delete.go:139] delete failed (probably ok) <nil>
	I1117 14:57:54.668447   12552 fix.go:120] Sleeping 1 second for extra luck!
	I1117 14:57:55.669255   12552 start.go:126] createHost starting for "" (driver="docker")
	I1117 14:57:55.696625   12552 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1117 14:57:55.696723   12552 start.go:160] libmachine.API.Create for "kubernetes-upgrade-20211117145621-2140" (driver="docker")
	I1117 14:57:55.696751   12552 client.go:168] LocalClient.Create starting
	I1117 14:57:55.696856   12552 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/ca.pem
	I1117 14:57:55.696916   12552 main.go:130] libmachine: Decoding PEM data...
	I1117 14:57:55.696928   12552 main.go:130] libmachine: Parsing certificate...
	I1117 14:57:55.696970   12552 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/certs/cert.pem
	I1117 14:57:55.696996   12552 main.go:130] libmachine: Decoding PEM data...
	I1117 14:57:55.697003   12552 main.go:130] libmachine: Parsing certificate...
	I1117 14:57:55.697781   12552 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117145621-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:57:55.812759   12552 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117145621-2140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:57:55.812866   12552 network_create.go:254] running [docker network inspect kubernetes-upgrade-20211117145621-2140] to gather additional debugging logs...
	I1117 14:57:55.812884   12552 cli_runner.go:115] Run: docker network inspect kubernetes-upgrade-20211117145621-2140
	W1117 14:57:55.933827   12552 cli_runner.go:162] docker network inspect kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:55.933856   12552 network_create.go:257] error running [docker network inspect kubernetes-upgrade-20211117145621-2140]: docker network inspect kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:55.933879   12552 network_create.go:259] output of [docker network inspect kubernetes-upgrade-20211117145621-2140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	I1117 14:57:55.933971   12552 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1117 14:57:56.052858   12552 cli_runner.go:162] docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1117 14:57:56.052976   12552 network_create.go:254] running [docker network inspect bridge] to gather additional debugging logs...
	I1117 14:57:56.052997   12552 cli_runner.go:115] Run: docker network inspect bridge
	W1117 14:57:56.167364   12552 cli_runner.go:162] docker network inspect bridge returned with exit code 1
	I1117 14:57:56.167392   12552 network_create.go:257] error running [docker network inspect bridge]: docker network inspect bridge: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:56.167403   12552 network_create.go:259] output of [docker network inspect bridge]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: Bad response from Docker engine
	
	** /stderr **
	W1117 14:57:56.167409   12552 network_create.go:75] failed to get mtu information from the docker's default network "bridge": docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:56.167642   12552 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00013e460] amended:false}} dirty:map[] misses:0}
	I1117 14:57:56.167658   12552 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:57:56.167870   12552 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00013e460] amended:true}} dirty:map[192.168.49.0:0xc00013e460 192.168.58.0:0xc0016963c0] misses:0}
	I1117 14:57:56.167885   12552 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1117 14:57:56.167891   12552 network_create.go:106] attempt to create docker network kubernetes-upgrade-20211117145621-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0 ...
	I1117 14:57:56.167973   12552 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117145621-2140
	W1117 14:57:56.285442   12552 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	E1117 14:57:56.285480   12552 network_create.go:95] error while trying to create docker network kubernetes-upgrade-20211117145621-2140 192.168.58.0/24: create docker network kubernetes-upgrade-20211117145621-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	W1117 14:57:56.285600   12552 out.go:241] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20211117145621-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create docker network kubernetes-upgrade-20211117145621-2140 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 0: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I1117 14:57:56.285726   12552 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	W1117 14:57:56.420195   12552 cli_runner.go:162] docker ps -a --format {{.Names}} returned with exit code 1
	W1117 14:57:56.420221   12552 kic.go:149] failed to check if container already exists: docker ps -a --format {{.Names}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:56.420314   12552 cli_runner.go:115] Run: docker volume create kubernetes-upgrade-20211117145621-2140 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117145621-2140 --label created_by.minikube.sigs.k8s.io=true
	W1117 14:57:56.534138   12552 cli_runner.go:162] docker volume create kubernetes-upgrade-20211117145621-2140 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117145621-2140 --label created_by.minikube.sigs.k8s.io=true returned with exit code 1
	I1117 14:57:56.534180   12552 client.go:171] LocalClient.Create took 837.413736ms
	I1117 14:57:58.534482   12552 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:57:58.534609   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:57:58.653847   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:58.653926   12552 retry.go:31] will retry after 178.565968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:58.834883   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:57:58.954131   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:58.954214   12552 retry.go:31] will retry after 330.246446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:59.294865   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:57:59.411660   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:57:59.411736   12552 retry.go:31] will retry after 460.157723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:57:59.882274   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:58:00.000510   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	W1117 14:58:00.000610   12552 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 14:58:00.000628   12552 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:58:00.000646   12552 start.go:129] duration metric: createHost completed in 4.331310991s
	I1117 14:58:00.000700   12552 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 14:58:00.000767   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:58:00.117534   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:58:00.117623   12552 retry.go:31] will retry after 195.758538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:58:00.314616   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:58:00.429647   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:58:00.429753   12552 retry.go:31] will retry after 297.413196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:58:00.733107   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:58:00.852925   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	I1117 14:58:00.853018   12552 retry.go:31] will retry after 663.23513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:58:01.526700   12552 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140
	W1117 14:58:01.646009   12552 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140 returned with exit code 1
	W1117 14:58:01.646086   12552 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 14:58:01.646101   12552 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-20211117145621-2140": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20211117145621-2140: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	I1117 14:58:01.646121   12552 fix.go:57] fixHost completed within 23.593614957s
	I1117 14:58:01.646130   12552 start.go:80] releasing machines lock for "kubernetes-upgrade-20211117145621-2140", held for 23.593659221s
	W1117 14:58:01.646243   12552 out.go:241] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20211117145621-2140" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117145621-2140 container: docker volume create kubernetes-upgrade-20211117145621-2140 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117145621-2140 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20211117145621-2140" may fix it: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117145621-2140 container: docker volume create kubernetes-upgrade-20211117145621-2140 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117145621-2140 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	I1117 14:58:01.694859   12552 out.go:176] 
	W1117 14:58:01.695092   12552 out.go:241] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117145621-2140 container: docker volume create kubernetes-upgrade-20211117145621-2140 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117145621-2140 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: creating volume for kubernetes-upgrade-20211117145621-2140 container: docker volume create kubernetes-upgrade-20211117145621-2140 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20211117145621-2140 --label created_by.minikube.sigs.k8s.io=true: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	W1117 14:58:01.695131   12552 out.go:241] * 
	* 
	W1117 14:58:01.696435   12552 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 14:58:01.795521   12552 out.go:176] 

** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211117145621-2140 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker : exit status 80
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20211117145621-2140
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20211117145621-2140: exit status 82 (14.781220417s)

-- stdout --
	* Stopping node "kubernetes-upgrade-20211117145621-2140"  ...
	* Stopping node "kubernetes-upgrade-20211117145621-2140"  ...
	* Stopping node "kubernetes-upgrade-20211117145621-2140"  ...
	* Stopping node "kubernetes-upgrade-20211117145621-2140"  ...
	* Stopping node "kubernetes-upgrade-20211117145621-2140"  ...
	* Stopping node "kubernetes-upgrade-20211117145621-2140"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect kubernetes-upgrade-20211117145621-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:236: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20211117145621-2140 failed: exit status 82
panic.go:642: *** TestKubernetesUpgrade FAILED at 2021-11-17 14:58:16.615093 -0800 PST m=+2094.682119867
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20211117145621-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect kubernetes-upgrade-20211117145621-2140: exit status 1 (114.982595ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20211117145621-2140 -n kubernetes-upgrade-20211117145621-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20211117145621-2140 -n kubernetes-upgrade-20211117145621-2140: exit status 7 (158.12527ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:58:16.887351   13012 status.go:247] status error: host: state: unknown state "kubernetes-upgrade-20211117145621-2140": docker container inspect kubernetes-upgrade-20211117145621-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20211117145621-2140" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20211117145621-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20211117145621-2140
--- FAIL: TestKubernetesUpgrade (116.56s)

TestMissingContainerUpgrade (228.64s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3671502253.exe start -p missing-upgrade-20211117145557-2140 --memory=2200 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3671502253.exe start -p missing-upgrade-20211117145557-2140 --memory=2200 --driver=docker : exit status 70 (2m26.309363498s)

-- stdout --
	* [missing-upgrade-20211117145557-2140] minikube v1.9.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20211117145557-2140
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5945MB available) ...
	* docker "missing-upgrade-20211117145557-2140" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create host timed out in 120.000000 seconds
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20211117145557-2140" may fix it.: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20211117145557-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3671502253.exe start -p missing-upgrade-20211117145557-2140 --memory=2200 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3671502253.exe start -p missing-upgrade-20211117145557-2140 --memory=2200 --driver=docker : exit status 70 (36.217300378s)

-- stdout --
	* [missing-upgrade-20211117145557-2140] minikube v1.9.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20211117145557-2140
	* Pulling base image ...
	* docker "missing-upgrade-20211117145557-2140" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* docker "missing-upgrade-20211117145557-2140" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E1117 14:58:28.273492   13134 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20211117145557-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20211117145557-2140" may fix it.: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20211117145557-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3671502253.exe start -p missing-upgrade-20211117145557-2140 --memory=2200 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3671502253.exe start -p missing-upgrade-20211117145557-2140 --memory=2200 --driver=docker : exit status 70 (41.565775317s)

-- stdout --
	* [missing-upgrade-20211117145557-2140] minikube v1.9.1 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20211117145557-2140
	* Pulling base image ...
	* docker "missing-upgrade-20211117145557-2140" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* docker "missing-upgrade-20211117145557-2140" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E1117 14:59:06.477853   13465 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20211117145557-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20211117145557-2140" may fix it.: recreate: creating host: create: creating: create kic node: creating volume for missing-upgrade-20211117145557-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:642: *** TestMissingContainerUpgrade FAILED at 2021-11-17 14:59:45.4004 -0800 PST m=+2183.466231771
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20211117145557-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect missing-upgrade-20211117145557-2140: exit status 1 (120.306731ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20211117145557-2140 -n missing-upgrade-20211117145557-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20211117145557-2140 -n missing-upgrade-20211117145557-2140: exit status 7 (203.827718ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1117 14:59:45.723613   13838 status.go:247] status error: host: state: unknown state "missing-upgrade-20211117145557-2140": docker container inspect missing-upgrade-20211117145557-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "missing-upgrade-20211117145557-2140" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "missing-upgrade-20211117145557-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20211117145557-2140
--- FAIL: TestMissingContainerUpgrade (228.64s)

TestStoppedBinaryUpgrade/Upgrade (115.02s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.923888628.exe start -p stopped-upgrade-20211117145817-2140 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.923888628.exe start -p stopped-upgrade-20211117145817-2140 --memory=2200 --vm-driver=docker : exit status 70 (32.483455404s)

-- stdout --
	* [stopped-upgrade-20211117145817-2140] minikube v1.9.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2531526962
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117145817-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "stopped-upgrade-20211117145817-2140" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117145817-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20211117145817-2140", then "minikube start -p stopped-upgrade-20211117145817-2140 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	E1117 14:58:26.397915   13047 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117145817-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.923888628.exe start -p stopped-upgrade-20211117145817-2140 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.923888628.exe start -p stopped-upgrade-20211117145817-2140 --memory=2200 --vm-driver=docker : exit status 70 (42.293166088s)

-- stdout --
	* [stopped-upgrade-20211117145817-2140] minikube v1.9.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1867816711
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-20211117145817-2140" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117145817-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "stopped-upgrade-20211117145817-2140" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117145817-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20211117145817-2140", then "minikube start -p stopped-upgrade-20211117145817-2140 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E1117 14:58:54.925208   13356 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117145817-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.923888628.exe start -p stopped-upgrade-20211117145817-2140 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.923888628.exe start -p stopped-upgrade-20211117145817-2140 --memory=2200 --vm-driver=docker : exit status 70 (37.663621181s)

-- stdout --
	* [stopped-upgrade-20211117145817-2140] minikube v1.9.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig653919615
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* docker "stopped-upgrade-20211117145817-2140" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117145817-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* docker "stopped-upgrade-20211117145817-2140" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (0 available), Memory=2200MB (0MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117145817-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	  - Run: "minikube delete -p stopped-upgrade-20211117145817-2140", then "minikube start -p stopped-upgrade-20211117145817-2140 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	
	! 'docker' driver reported an issue: exit status 1
	* Suggestion: Docker is not running or is responding too slow. Try: restarting docker desktop.
	
	E1117 14:59:38.353107   13738 cache.go:114] Error downloading kic artifacts:  error loading image: Error response from daemon: Bad response from Docker engine
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: creating volume for stopped-upgrade-20211117145817-2140 container: output Error response from daemon: Bad response from Docker engine
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (115.02s)

TestPause/serial/Start (0.66s)

=== RUN   TestPause/serial/Start
pause_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20211117145946-2140 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:78: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-20211117145946-2140 --memory=2048 --install-addons=false --wait=all --driver=docker : exit status 69 (457.862519ms)

-- stdout --
	* [pause-20211117145946-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
pause_test.go:80: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-20211117145946-2140 --memory=2048 --install-addons=false --wait=all --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117145946-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117145946-2140: exit status 1 (113.83534ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140: exit status 85 (92.139017ms)

-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117145946-2140" host is not running, skipping log retrieval (state="* Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117145946-2140\"")
--- FAIL: TestPause/serial/Start (0.66s)

TestPause/serial/SecondStartNoReconfiguration (0.64s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20211117145946-2140 --alsologtostderr -v=1 --driver=docker 
pause_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-20211117145946-2140 --alsologtostderr -v=1 --driver=docker : exit status 69 (430.957777ms)

-- stdout --
	* [pause-20211117145946-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 14:59:47.287382   13892 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:59:47.287513   13892 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:47.287519   13892 out.go:310] Setting ErrFile to fd 2...
	I1117 14:59:47.287522   13892 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:47.287595   13892 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:59:47.287846   13892 out.go:304] Setting JSON to false
	I1117 14:59:47.312607   13892 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3562,"bootTime":1637186425,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:59:47.312692   13892 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:59:47.339885   13892 out.go:176] * [pause-20211117145946-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:59:47.340137   13892 notify.go:174] Checking for updates...
	I1117 14:59:47.387581   13892 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:59:47.413579   13892 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:59:47.440392   13892 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:59:47.466569   13892 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:59:47.467359   13892 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:59:47.467525   13892 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 14:59:47.467585   13892 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 14:59:47.556079   13892 docker.go:108] docker version returned error: exit status 1
	I1117 14:59:47.583051   13892 out.go:176] * Using the docker driver based on user configuration
	I1117 14:59:47.583109   13892 start.go:280] selected driver: docker
	I1117 14:59:47.583124   13892 start.go:775] validating driver "docker" against <nil>
	I1117 14:59:47.583146   13892 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 14:59:47.630827   13892 out.go:176] 
	W1117 14:59:47.631045   13892 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 14:59:47.631140   13892 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 14:59:47.657776   13892 out.go:176] 

                                                
                                                
** /stderr **
pause_test.go:92: failed to second start a running minikube with args: "out/minikube-darwin-amd64 start -p pause-20211117145946-2140 --alsologtostderr -v=1 --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117145946-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117145946-2140: exit status 1 (116.055331ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140: exit status 85 (94.068972ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117145946-2140" host is not running, skipping log retrieval (state="* Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117145946-2140\"")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (0.64s)
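The driver-status line above shows why the start aborts: minikube's health probe runs `docker version --format {{.Server.Os}}-{{.Server.Version}}` and marks the provider unhealthy on any non-zero exit, which here is "Bad response from Docker engine". A minimal sketch of that kind of probe is below; the `probe` helper and its error wrapping are illustrative, not minikube's actual implementation, and the command is parameterized so the sketch can run without a Docker daemon.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe runs a command and returns its trimmed combined output, or an
// error that embeds the command line and whatever the tool printed --
// the same shape as the PROVIDER_DOCKER_VERSION_EXIT_1 message in the log.
func probe(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("%q %v: %s",
			name+" "+strings.Join(args, " "), err,
			strings.TrimSpace(string(out)))
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// With a healthy daemon this would print something like "linux-20.10.8";
	// in this report the same command fails with exit status 1.
	v, err := probe("docker", "version", "--format", "{{.Server.Os}}-{{.Server.Version}}")
	fmt.Println(v, err)
}
```

Because the probe only inspects the exit status and captured output, any stand-in command exercises both paths.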

                                                
                                    
TestPause/serial/Pause (0.51s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20211117145946-2140 --alsologtostderr -v=5
pause_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p pause-20211117145946-2140 --alsologtostderr -v=5: exit status 85 (93.812175ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 14:59:47.929568   13904 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:59:47.930063   13904 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:47.930070   13904 out.go:310] Setting ErrFile to fd 2...
	I1117 14:59:47.930073   13904 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:47.930148   13904 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:59:47.930316   13904 out.go:304] Setting JSON to false
	I1117 14:59:47.930332   13904 mustload.go:65] Loading cluster: pause-20211117145946-2140
	I1117 14:59:47.956746   13904 out.go:176] * Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 14:59:47.982783   13904 out.go:176]   To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
** /stderr **
pause_test.go:110: failed to pause minikube with args: "out/minikube-darwin-amd64 pause -p pause-20211117145946-2140 --alsologtostderr -v=5" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117145946-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117145946-2140: exit status 1 (114.531945ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140: exit status 85 (92.445505ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117145946-2140" host is not running, skipping log retrieval (state="* Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117145946-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117145946-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117145946-2140: exit status 1 (113.708114ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140: exit status 85 (94.452838ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117145946-2140" host is not running, skipping log retrieval (state="* Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117145946-2140\"")
--- FAIL: TestPause/serial/Pause (0.51s)

                                                
                                    
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20211117145946-2140 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20211117145946-2140 --output=json --layout=cluster: exit status 85 (40.850609ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6f05eb49-b15f-4459-ad2d-42aadf44a4a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles."}}
	{"specversion":"1.0","id":"2d526dc0-8f12-439e-9113-58c6f0ca1b06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p pause-20211117145946-2140\""}}

                                                
                                                
-- /stdout --
pause_test.go:194: unmarshalling: invalid character '{' after top-level value
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117145946-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117145946-2140: exit status 1 (113.764688ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140: exit status 85 (93.669356ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117145946-2140" host is not running, skipping log retrieval (state="* Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117145946-2140\"")
--- FAIL: TestPause/serial/VerifyStatus (0.25s)
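The `pause_test.go:194: unmarshalling: invalid character '{' after top-level value` failure above is the classic symptom of calling `json.Unmarshal` on a stream of concatenated JSON values: `minikube status --output=json` emitted two CloudEvents-style objects, one per line, rather than a single value. A `json.Decoder` handles such a stream. The sketch below is a minimal illustration, assuming a `cloudEvent` struct and `decodeEvents` helper that are hypothetical names, not minikube's own types.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// cloudEvent mirrors the minimal fields of the JSON events in the log
// output above (type plus a data.message payload).
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// decodeEvents reads a stream of concatenated top-level JSON values.
// A single json.Unmarshal over the whole stream fails with
// "invalid character '{' after top-level value"; a json.Decoder,
// advanced with More/Decode, does not.
func decodeEvents(stream string) ([]cloudEvent, error) {
	dec := json.NewDecoder(strings.NewReader(stream))
	var events []cloudEvent
	for dec.More() {
		var ev cloudEvent
		if err := dec.Decode(&ev); err != nil {
			return nil, err
		}
		events = append(events, ev)
	}
	return events, nil
}

func main() {
	stream := `{"type":"io.k8s.sigs.minikube.info","data":{"message":"a"}}
{"type":"io.k8s.sigs.minikube.info","data":{"message":"b"}}`
	events, err := decodeEvents(stream)
	fmt.Println(len(events), err) // prints: 2 <nil>
}
```

The test itself fails for a different reason here (the expected cluster-status object is missing entirely), but the unmarshalling error is what surfaces first.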

                                                
                                    
TestPause/serial/Unpause (0.53s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:119: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-20211117145946-2140 --alsologtostderr -v=5
pause_test.go:119: (dbg) Non-zero exit: out/minikube-darwin-amd64 unpause -p pause-20211117145946-2140 --alsologtostderr -v=5: exit status 85 (95.010661ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 14:59:48.689156   13921 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:59:48.689622   13921 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:48.689627   13921 out.go:310] Setting ErrFile to fd 2...
	I1117 14:59:48.689630   13921 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:48.689705   13921 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:59:48.689962   13921 mustload.go:65] Loading cluster: pause-20211117145946-2140
	I1117 14:59:48.717141   13921 out.go:176] * Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 14:59:48.743418   13921 out.go:176]   To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
** /stderr **
pause_test.go:121: failed to unpause minikube with args: "out/minikube-darwin-amd64 unpause -p pause-20211117145946-2140 --alsologtostderr -v=5" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Unpause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117145946-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117145946-2140: exit status 1 (117.813706ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140: exit status 85 (99.984334ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117145946-2140" host is not running, skipping log retrieval (state="* Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117145946-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Unpause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117145946-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117145946-2140: exit status 1 (119.423066ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140: exit status 85 (94.240667ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117145946-2140" host is not running, skipping log retrieval (state="* Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117145946-2140\"")
--- FAIL: TestPause/serial/Unpause (0.53s)

                                                
                                    
TestPause/serial/PauseAgain (0.52s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20211117145946-2140 --alsologtostderr -v=5
pause_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p pause-20211117145946-2140 --alsologtostderr -v=5: exit status 85 (93.558563ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 14:59:49.217189   13940 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:59:49.217382   13940 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:49.217388   13940 out.go:310] Setting ErrFile to fd 2...
	I1117 14:59:49.217391   13940 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:49.217471   13940 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:59:49.217624   13940 out.go:304] Setting JSON to false
	I1117 14:59:49.217639   13940 mustload.go:65] Loading cluster: pause-20211117145946-2140
	I1117 14:59:49.243635   13940 out.go:176] * Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 14:59:49.269710   13940 out.go:176]   To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
** /stderr **
pause_test.go:110: failed to pause minikube with args: "out/minikube-darwin-amd64 pause -p pause-20211117145946-2140 --alsologtostderr -v=5" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117145946-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117145946-2140: exit status 1 (116.208894ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140: exit status 85 (92.970832ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117145946-2140" host is not running, skipping log retrieval (state="* Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117145946-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/PauseAgain]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117145946-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117145946-2140: exit status 1 (125.742458ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140: exit status 85 (92.153929ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117145946-2140" host is not running, skipping log retrieval (state="* Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117145946-2140\"")
--- FAIL: TestPause/serial/PauseAgain (0.52s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (1.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:166: (dbg) Run:  docker ps -a
pause_test.go:166: (dbg) Non-zero exit: docker ps -a: exit status 1 (124.37694ms)

                                                
                                                
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
pause_test.go:171: (dbg) Run:  docker volume inspect pause-20211117145946-2140
pause_test.go:171: (dbg) Non-zero exit: docker volume inspect pause-20211117145946-2140: exit status 1 (119.960665ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
pause_test.go:176: (dbg) Run:  sudo docker network ls
pause_test.go:176: (dbg) Non-zero exit: sudo docker network ls: exit status 1 (140.583779ms)

                                                
                                                
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
pause_test.go:178: failed to get list of networks: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117145946-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117145946-2140: exit status 1 (121.336948ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140: exit status 85 (101.636504ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117145946-2140" host is not running, skipping log retrieval (state="* Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117145946-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20211117145946-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect pause-20211117145946-2140: exit status 1 (118.178276ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20211117145946-2140 -n pause-20211117145946-2140: exit status 85 (94.717828ms)

                                                
                                                
-- stdout --
	* Profile "pause-20211117145946-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20211117145946-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20211117145946-2140" host is not running, skipping log retrieval (state="* Profile \"pause-20211117145946-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20211117145946-2140\"")
--- FAIL: TestPause/serial/VerifyDeletedResources (1.26s)

                                                
                                    
TestNoKubernetes/serial/Start (0.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20211117145952-2140 --no-kubernetes --driver=docker 
no_kubernetes_test.go:78: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20211117145952-2140 --no-kubernetes --driver=docker : exit status 69 (454.049487ms)

-- stdout --
	* [NoKubernetes-20211117145952-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
no_kubernetes_test.go:80: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-20211117145952-2140 --no-kubernetes --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117145952-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20211117145952-2140: exit status 1 (123.581244ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117145952-2140 -n NoKubernetes-20211117145952-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117145952-2140 -n NoKubernetes-20211117145952-2140: exit status 85 (93.562328ms)

-- stdout --
	* Profile "NoKubernetes-20211117145952-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117145952-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117145952-2140" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20211117145952-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20211117145952-2140\"")
--- FAIL: TestNoKubernetes/serial/Start (0.67s)

TestNoKubernetes/serial/ProfileList (0.63s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:117: expected N/A in the profile list for kubernetes version but got : "out/minikube-darwin-amd64 profile list" : 
-- stdout --
	|-------------------------------------|-----------|---------|----|------|---------|---------|-------|
	|               Profile               | VM Driver | Runtime | IP | Port | Version | Status  | Nodes |
	|-------------------------------------|-----------|---------|----|------|---------|---------|-------|
	| multinode-20211117144058-2140-m01   | docker    | docker  |    | 8443 | v1.22.3 | Unknown |     1 |
	| stopped-upgrade-20211117145817-2140 | docker    | docker  |    | 8443 | v1.18.0 | Unknown |     1 |
	|-------------------------------------|-----------|---------|----|------|---------|---------|-------|

-- /stdout --
** stderr ** 
	! Found 1 invalid profile(s) ! 
	* 	 NoKubernetes-20211117145952-2140
	* You can delete them using the following command(s): 
		 $ minikube delete -p NoKubernetes-20211117145952-2140 

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117145952-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20211117145952-2140: exit status 1 (118.275115ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117145952-2140 -n NoKubernetes-20211117145952-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117145952-2140 -n NoKubernetes-20211117145952-2140: exit status 85 (93.602674ms)

-- stdout --
	* Profile "NoKubernetes-20211117145952-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117145952-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117145952-2140" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20211117145952-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20211117145952-2140\"")
--- FAIL: TestNoKubernetes/serial/ProfileList (0.63s)

TestNoKubernetes/serial/Stop (0.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20211117145952-2140
no_kubernetes_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p NoKubernetes-20211117145952-2140: exit status 85 (93.804688ms)

-- stdout --
	* Profile "NoKubernetes-20211117145952-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117145952-2140"

-- /stdout --
no_kubernetes_test.go:102: Failed to stop minikube "out/minikube-darwin-amd64 stop -p NoKubernetes-20211117145952-2140" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117145952-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20211117145952-2140: exit status 1 (122.509955ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117145952-2140 -n NoKubernetes-20211117145952-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117145952-2140 -n NoKubernetes-20211117145952-2140: exit status 85 (153.900773ms)

-- stdout --
	* Profile "NoKubernetes-20211117145952-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117145952-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117145952-2140" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20211117145952-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20211117145952-2140\"")
--- FAIL: TestNoKubernetes/serial/Stop (0.37s)

TestNoKubernetes/serial/StartNoArgs (0.64s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20211117145952-2140 --driver=docker 
no_kubernetes_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20211117145952-2140 --driver=docker : exit status 69 (422.973844ms)

-- stdout --
	* [NoKubernetes-20211117145952-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/

** /stderr **
no_kubernetes_test.go:135: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-20211117145952-2140 --driver=docker " : exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartNoArgs]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20211117145952-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect NoKubernetes-20211117145952-2140: exit status 1 (118.803893ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117145952-2140 -n NoKubernetes-20211117145952-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-20211117145952-2140 -n NoKubernetes-20211117145952-2140: exit status 85 (95.159602ms)

-- stdout --
	* Profile "NoKubernetes-20211117145952-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117145952-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-20211117145952-2140" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-20211117145952-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-20211117145952-2140\"")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (0.64s)

TestNetworkPlugins/group/auto/Start (0.46s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20211117144907-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p auto-20211117144907-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : exit status 69 (459.271929ms)

-- stdout --
	* [auto-20211117144907-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 14:59:55.627022   14152 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:59:55.627155   14152 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:55.627161   14152 out.go:310] Setting ErrFile to fd 2...
	I1117 14:59:55.627164   14152 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:55.627242   14152 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:59:55.627551   14152 out.go:304] Setting JSON to false
	I1117 14:59:55.652449   14152 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3570,"bootTime":1637186425,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:59:55.652545   14152 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:59:55.679721   14152 out.go:176] * [auto-20211117144907-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:59:55.679869   14152 notify.go:174] Checking for updates...
	I1117 14:59:55.727222   14152 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:59:55.753512   14152 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:59:55.779254   14152 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:59:55.805003   14152 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:59:55.805420   14152 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:59:55.805495   14152 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 14:59:55.805528   14152 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 14:59:55.899491   14152 docker.go:108] docker version returned error: exit status 1
	I1117 14:59:55.926540   14152 out.go:176] * Using the docker driver based on user configuration
	I1117 14:59:55.926600   14152 start.go:280] selected driver: docker
	I1117 14:59:55.926613   14152 start.go:775] validating driver "docker" against <nil>
	I1117 14:59:55.926636   14152 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 14:59:55.975131   14152 out.go:176] 
	W1117 14:59:55.975305   14152 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 14:59:55.975400   14152 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 14:59:56.024250   14152 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/auto/Start (0.46s)

TestNetworkPlugins/group/kindnet/Start (0.44s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20211117144908-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-20211117144908-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : exit status 69 (434.911723ms)

-- stdout --
	* [kindnet-20211117144908-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 14:59:57.065401   14185 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:59:57.065547   14185 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:57.065554   14185 out.go:310] Setting ErrFile to fd 2...
	I1117 14:59:57.065557   14185 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:57.065636   14185 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:59:57.065949   14185 out.go:304] Setting JSON to false
	I1117 14:59:57.090813   14185 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3572,"bootTime":1637186425,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:59:57.090915   14185 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:59:57.119250   14185 out.go:176] * [kindnet-20211117144908-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:59:57.119478   14185 notify.go:174] Checking for updates...
	I1117 14:59:57.167888   14185 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:59:57.193957   14185 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:59:57.219457   14185 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:59:57.245000   14185 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:59:57.245805   14185 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:59:57.245978   14185 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 14:59:57.246041   14185 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 14:59:57.337331   14185 docker.go:108] docker version returned error: exit status 1
	I1117 14:59:57.363567   14185 out.go:176] * Using the docker driver based on user configuration
	I1117 14:59:57.363598   14185 start.go:280] selected driver: docker
	I1117 14:59:57.363609   14185 start.go:775] validating driver "docker" against <nil>
	I1117 14:59:57.363622   14185 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 14:59:57.410542   14185 out.go:176] 
	W1117 14:59:57.410702   14185 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 14:59:57.410802   14185 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 14:59:57.436523   14185 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/kindnet/Start (0.44s)

TestNetworkPlugins/group/false/Start (0.43s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20211117144908-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p false-20211117144908-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : exit status 69 (430.277961ms)

-- stdout --
	* [false-20211117144908-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 14:59:58.390336   14218 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:59:58.390471   14218 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:58.390476   14218 out.go:310] Setting ErrFile to fd 2...
	I1117 14:59:58.390480   14218 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:58.390555   14218 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:59:58.390866   14218 out.go:304] Setting JSON to false
	I1117 14:59:58.415710   14218 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3573,"bootTime":1637186425,"procs":356,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:59:58.415800   14218 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:59:58.442991   14218 out.go:176] * [false-20211117144908-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:59:58.443211   14218 notify.go:174] Checking for updates...
	I1117 14:59:58.491689   14218 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:59:58.517490   14218 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:59:58.543289   14218 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:59:58.569428   14218 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:59:58.569848   14218 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:59:58.569923   14218 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 14:59:58.569954   14218 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 14:59:58.661377   14218 docker.go:108] docker version returned error: exit status 1
	I1117 14:59:58.688160   14218 out.go:176] * Using the docker driver based on user configuration
	I1117 14:59:58.688183   14218 start.go:280] selected driver: docker
	I1117 14:59:58.688195   14218 start.go:775] validating driver "docker" against <nil>
	I1117 14:59:58.688212   14218 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 14:59:58.734965   14218 out.go:176] 
	W1117 14:59:58.735248   14218 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 14:59:58.735323   14218 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 14:59:58.761124   14218 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/false/Start (0.43s)

TestNetworkPlugins/group/enable-default-cni/Start (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20211117144907-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p enable-default-cni-20211117144907-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : exit status 69 (406.989775ms)

-- stdout --
	* [enable-default-cni-20211117144907-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 14:59:59.721075   14253 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:59:59.721211   14253 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:59.721217   14253 out.go:310] Setting ErrFile to fd 2...
	I1117 14:59:59.721220   14253 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:59:59.721293   14253 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:59:59.721594   14253 out.go:304] Setting JSON to false
	I1117 14:59:59.746344   14253 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3574,"bootTime":1637186425,"procs":358,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:59:59.746435   14253 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:59:59.773448   14253 out.go:176] * [enable-default-cni-20211117144907-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:59:59.773584   14253 notify.go:174] Checking for updates...
	I1117 14:59:59.820416   14253 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:59:59.846091   14253 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:59:59.871968   14253 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:59:59.898123   14253 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:59:59.898572   14253 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:59:59.898647   14253 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 14:59:59.898679   14253 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 14:59:59.987493   14253 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:00.014442   14253 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:00.014501   14253 start.go:280] selected driver: docker
	I1117 15:00:00.014545   14253 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:00.014590   14253 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:00.041031   14253 out.go:176] 
	W1117 15:00:00.041303   14253 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:00.041373   14253 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:00.067225   14253 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (0.41s)

TestNetworkPlugins/group/bridge/Start (0.46s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20211117144907-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p bridge-20211117144907-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : exit status 69 (459.439996ms)

-- stdout --
	* [bridge-20211117144907-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:01.016834   14294 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:01.016961   14294 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:01.016966   14294 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:01.016970   14294 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:01.017057   14294 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:01.017364   14294 out.go:304] Setting JSON to false
	I1117 15:00:01.043113   14294 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3576,"bootTime":1637186425,"procs":358,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:01.043240   14294 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:01.070535   14294 out.go:176] * [bridge-20211117144907-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:01.070716   14294 notify.go:174] Checking for updates...
	I1117 15:00:01.119113   14294 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:01.145163   14294 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:01.171046   14294 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:01.196850   14294 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:01.197284   14294 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:01.197363   14294 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 15:00:01.197396   14294 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:01.286850   14294 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:01.314046   14294 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:01.314115   14294 start.go:280] selected driver: docker
	I1117 15:00:01.314134   14294 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:01.314157   14294 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:01.362286   14294 out.go:176] 
	W1117 15:00:01.362442   14294 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:01.362503   14294 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:01.415692   14294 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/bridge/Start (0.46s)

TestNetworkPlugins/group/kubenet/Start (0.44s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20211117144907-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubenet-20211117144907-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : exit status 69 (436.537779ms)

-- stdout --
	* [kubenet-20211117144907-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:02.450827   14345 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:02.450961   14345 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:02.450967   14345 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:02.450970   14345 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:02.451052   14345 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:02.451358   14345 out.go:304] Setting JSON to false
	I1117 15:00:02.476486   14345 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3577,"bootTime":1637186425,"procs":364,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:02.476588   14345 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:02.503747   14345 out.go:176] * [kubenet-20211117144907-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:02.503825   14345 notify.go:174] Checking for updates...
	I1117 15:00:02.550541   14345 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:02.576518   14345 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:02.602403   14345 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:02.628522   14345 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:02.628965   14345 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:02.629039   14345 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 15:00:02.629074   14345 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:02.720404   14345 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:02.747573   14345 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:02.747597   14345 start.go:280] selected driver: docker
	I1117 15:00:02.747605   14345 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:02.747620   14345 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:02.800988   14345 out.go:176] 
	W1117 15:00:02.801225   14345 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:02.801323   14345 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:02.826945   14345 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/kubenet/Start (0.44s)

TestNetworkPlugins/group/calico/Start (0.43s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20211117144908-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-20211117144908-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : exit status 69 (428.63617ms)

-- stdout --
	* [calico-20211117144908-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:03.784687   14382 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:03.784808   14382 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:03.784814   14382 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:03.784817   14382 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:03.784894   14382 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:03.785206   14382 out.go:304] Setting JSON to false
	I1117 15:00:03.810185   14382 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3578,"bootTime":1637186425,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:03.810293   14382 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:03.837404   14382 out.go:176] * [calico-20211117144908-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:03.837632   14382 notify.go:174] Checking for updates...
	I1117 15:00:03.885138   14382 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:03.911049   14382 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:03.936682   14382 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:03.963045   14382 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:03.963829   14382 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:03.963956   14382 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 15:00:03.964006   14382 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:04.054340   14382 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:04.080374   14382 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:04.080423   14382 start.go:280] selected driver: docker
	I1117 15:00:04.080444   14382 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:04.080490   14382 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:04.127098   14382 out.go:176] 
	W1117 15:00:04.127220   14382 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:04.127261   14382 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:04.153126   14382 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/calico/Start (0.43s)

TestNetworkPlugins/group/cilium/Start (0.43s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20211117144908-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cilium-20211117144908-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : exit status 69 (431.158638ms)

-- stdout --
	* [cilium-20211117144908-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:05.120683   14423 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:05.120917   14423 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:05.120924   14423 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:05.120927   14423 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:05.120995   14423 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:05.121313   14423 out.go:304] Setting JSON to false
	I1117 15:00:05.146271   14423 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3580,"bootTime":1637186425,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:05.146370   14423 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:05.173616   14423 out.go:176] * [cilium-20211117144908-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:05.173772   14423 notify.go:174] Checking for updates...
	I1117 15:00:05.221123   14423 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:05.247204   14423 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:05.273017   14423 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:05.299950   14423 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:05.300380   14423 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:05.300453   14423 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 15:00:05.300490   14423 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:05.389648   14423 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:05.416491   14423 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:05.416586   14423 start.go:280] selected driver: docker
	I1117 15:00:05.416610   14423 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:05.416642   14423 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:05.464100   14423 out.go:176] 
	W1117 15:00:05.464254   14423 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:05.464307   14423 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:05.490020   14423 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/cilium/Start (0.43s)

TestNetworkPlugins/group/custom-weave/Start (0.41s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-weave-20211117144908-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker 
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p custom-weave-20211117144908-2140 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker : exit status 69 (407.120723ms)

-- stdout --
	* [custom-weave-20211117144908-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:06.512863   14460 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:06.513054   14460 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:06.513060   14460 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:06.513063   14460 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:06.513137   14460 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:06.513457   14460 out.go:304] Setting JSON to false
	I1117 15:00:06.538316   14460 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3581,"bootTime":1637186425,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:06.538411   14460 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:06.565536   14460 out.go:176] * [custom-weave-20211117144908-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:06.565759   14460 notify.go:174] Checking for updates...
	I1117 15:00:06.592357   14460 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:06.618152   14460 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:06.644118   14460 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:06.670015   14460 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:06.670468   14460 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:06.670549   14460 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 15:00:06.670578   14460 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:06.758854   14460 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:06.785865   14460 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:06.785902   14460 start.go:280] selected driver: docker
	I1117 15:00:06.785914   14460 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:06.785941   14460 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:06.833361   14460 out.go:176] 
	W1117 15:00:06.833455   14460 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:06.833497   14460 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:06.859302   14460 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 69
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (0.41s)

TestStartStop/group/old-k8s-version/serial/FirstStart (0.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20211117150007-2140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20211117150007-2140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0: exit status 69 (440.497574ms)

-- stdout --
	* [old-k8s-version-20211117150007-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:07.824770   14500 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:07.824961   14500 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:07.824968   14500 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:07.824971   14500 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:07.825054   14500 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:07.825368   14500 out.go:304] Setting JSON to false
	I1117 15:00:07.852288   14500 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3582,"bootTime":1637186425,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:07.852401   14500 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:07.879671   14500 out.go:176] * [old-k8s-version-20211117150007-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:07.879886   14500 notify.go:174] Checking for updates...
	I1117 15:00:07.906231   14500 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:07.932263   14500 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:07.958053   14500 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:07.984173   14500 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:07.984617   14500 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:07.984703   14500 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 15:00:07.984736   14500 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:08.076169   14500 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:08.103288   14500 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:08.103348   14500 start.go:280] selected driver: docker
	I1117 15:00:08.103362   14500 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:08.103404   14500 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:08.151906   14500 out.go:176] 
	W1117 15:00:08.152128   14500 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:08.152251   14500 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:08.200822   14500 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20211117150007-2140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (130.251216ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (98.414703ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (0.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20211117150007-2140 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117150007-2140 create -f testdata/busybox.yaml: exit status 1 (41.958061ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20211117150007-2140" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:181: kubectl --context old-k8s-version-20211117150007-2140 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (121.06641ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (94.620516ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (119.25106ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (94.643572ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20211117150007-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20211117150007-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (96.800817ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "old-k8s-version-20211117150007-2140" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:192: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20211117150007-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20211117150007-2140 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117150007-2140 describe deploy/metrics-server -n kube-system: exit status 1 (38.771423ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20211117150007-2140" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20211117150007-2140 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (117.518438ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (94.428547ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20211117150007-2140 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p old-k8s-version-20211117150007-2140 --alsologtostderr -v=3: exit status 85 (93.605243ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 15:00:09.312019   14547 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:09.312222   14547 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:09.312228   14547 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:09.312231   14547 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:09.312309   14547 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:09.312474   14547 out.go:304] Setting JSON to false
	I1117 15:00:09.312592   14547 mustload.go:65] Loading cluster: old-k8s-version-20211117150007-2140
	I1117 15:00:09.337972   14547 out.go:176] * Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 15:00:09.364153   14547 out.go:176]   To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p old-k8s-version-20211117150007-2140 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (121.310014ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (93.831112ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (0.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (97.064986ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 85 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\""*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20211117150007-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20211117150007-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (100.878029ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "old-k8s-version-20211117150007-2140" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:233: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20211117150007-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (127.64338ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

                                                
                                                
** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (94.665618ms)

                                                
                                                
-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (0.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20211117150007-2140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20211117150007-2140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0: exit status 69 (426.618721ms)

                                                
                                                
-- stdout --
	* [old-k8s-version-20211117150007-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 15:00:10.042893   14568 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:10.043019   14568 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:10.043025   14568 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:10.043028   14568 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:10.043098   14568 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:10.043324   14568 out.go:304] Setting JSON to false
	I1117 15:00:10.068329   14568 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3585,"bootTime":1637186425,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:10.068429   14568 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:10.095630   14568 out.go:176] * [old-k8s-version-20211117150007-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:10.095860   14568 notify.go:174] Checking for updates...
	I1117 15:00:10.143174   14568 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:10.169316   14568 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:10.195302   14568 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:10.220863   14568 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:10.221279   14568 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:10.221358   14568 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 15:00:10.221388   14568 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:10.309053   14568 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:10.334996   14568 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:10.335105   14568 start.go:280] selected driver: docker
	I1117 15:00:10.335118   14568 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:10.335142   14568 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:10.382544   14568 out.go:176] 
	W1117 15:00:10.382741   14568 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:10.382813   14568 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:10.408770   14568 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20211117150007-2140 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.14.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (119.247647ms)

                                                
                                                
-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (93.239264ms)

-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (0.64s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20211117150007-2140" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (117.670393ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (94.776946ms)

-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-20211117150007-2140" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211117150007-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211117150007-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (42.188899ms)

** stderr ** 
	error: context "old-k8s-version-20211117150007-2140" does not exist

** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20211117150007-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (124.593401ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (143.106544ms)

-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.31s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-20211117150007-2140 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p old-k8s-version-20211117150007-2140 "sudo crictl images -o json": exit status 85 (97.161378ms)

-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

-- /stdout --
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p old-k8s-version-20211117150007-2140 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:289: failed to decode images json invalid character '*' looking for beginning of value. output:
* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"
start_stop_delete_test.go:289: v1.14.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.3.1",
- 	"k8s.gcr.io/etcd:3.3.10",
- 	"k8s.gcr.io/kube-apiserver:v1.14.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.14.0",
- 	"k8s.gcr.io/kube-proxy:v1.14.0",
- 	"k8s.gcr.io/kube-scheduler:v1.14.0",
- 	"k8s.gcr.io/pause:3.1",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (126.415045ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (98.259232ms)

-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/old-k8s-version/serial/Pause (0.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-20211117150007-2140 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p old-k8s-version-20211117150007-2140 --alsologtostderr -v=1: exit status 85 (93.3206ms)

-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

-- /stdout --
** stderr ** 
	I1117 15:00:11.529805   14613 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:11.529937   14613 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:11.529943   14613 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:11.529946   14613 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:11.530021   14613 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:11.530183   14613 out.go:304] Setting JSON to false
	I1117 15:00:11.530198   14613 mustload.go:65] Loading cluster: old-k8s-version-20211117150007-2140
	I1117 15:00:11.556635   14613 out.go:176] * Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 15:00:11.582463   14613 out.go:176]   To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p old-k8s-version-20211117150007-2140 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (115.221589ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (94.366889ms)

-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20211117150007-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect old-k8s-version-20211117150007-2140: exit status 1 (114.171809ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20211117150007-2140 -n old-k8s-version-20211117150007-2140: exit status 85 (130.179091ms)

-- stdout --
	* Profile "old-k8s-version-20211117150007-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p old-k8s-version-20211117150007-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "old-k8s-version-20211117150007-2140" host is not running, skipping log retrieval (state="* Profile \"old-k8s-version-20211117150007-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p old-k8s-version-20211117150007-2140\"")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.55s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20211117145817-2140
version_upgrade_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p stopped-upgrade-20211117145817-2140: exit status 80 (432.449698ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                                 Args                                  |                 Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20211117142648-2140 image save                             | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:08 PST | Wed, 17 Nov 2021 14:30:08 PST |
	|         | gcr.io/google-containers/addon-resizer:functional-20211117142648-2140 |                                          |         |         |                               |                               |
	|         | /Users/jenkins/workspace/addon-resizer-save.tar                       |                                          |         |         |                               |                               |
	| -p      | functional-20211117142648-2140 image rm                               | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:08 PST | Wed, 17 Nov 2021 14:30:08 PST |
	|         | gcr.io/google-containers/addon-resizer:functional-20211117142648-2140 |                                          |         |         |                               |                               |
	| -p      | functional-20211117142648-2140                                        | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:08 PST | Wed, 17 Nov 2021 14:30:08 PST |
	|         | image ls                                                              |                                          |         |         |                               |                               |
	| -p      | functional-20211117142648-2140 image load                             | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:08 PST | Wed, 17 Nov 2021 14:30:08 PST |
	|         | /Users/jenkins/workspace/addon-resizer-save.tar                       |                                          |         |         |                               |                               |
	| -p      | functional-20211117142648-2140                                        | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:08 PST | Wed, 17 Nov 2021 14:30:08 PST |
	|         | image ls                                                              |                                          |         |         |                               |                               |
	| -p      | functional-20211117142648-2140 image save --daemon                    | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:09 PST | Wed, 17 Nov 2021 14:30:09 PST |
	|         | gcr.io/google-containers/addon-resizer:functional-20211117142648-2140 |                                          |         |         |                               |                               |
	| -p      | functional-20211117142648-2140                                        | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:09 PST | Wed, 17 Nov 2021 14:30:10 PST |
	|         | addons list                                                           |                                          |         |         |                               |                               |
	| -p      | functional-20211117142648-2140                                        | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:10 PST | Wed, 17 Nov 2021 14:30:10 PST |
	|         | addons list -o json                                                   |                                          |         |         |                               |                               |
	| -p      | functional-20211117142648-2140                                        | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:39 PST | Wed, 17 Nov 2021 14:30:39 PST |
	|         | version --short                                                       |                                          |         |         |                               |                               |
	| -p      | functional-20211117142648-2140                                        | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:40 PST | Wed, 17 Nov 2021 14:30:40 PST |
	|         | image ls                                                              |                                          |         |         |                               |                               |
	| -p      | functional-20211117142648-2140 image build -t                         | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:40 PST | Wed, 17 Nov 2021 14:30:41 PST |
	|         | localhost/my-image:functional-20211117142648-2140                     |                                          |         |         |                               |                               |
	|         | testdata/build                                                        |                                          |         |         |                               |                               |
	| -p      | functional-20211117142648-2140                                        | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:30:41 PST | Wed, 17 Nov 2021 14:30:41 PST |
	|         | image ls                                                              |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | functional-20211117142648-2140           | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:31:22 PST | Wed, 17 Nov 2021 14:31:26 PST |
	|         | functional-20211117142648-2140                                        |                                          |         |         |                               |                               |
	| -p      | ingress-addon-legacy-20211117143126-2140                              | ingress-addon-legacy-20211117143126-2140 | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:32:19 PST | Wed, 17 Nov 2021 14:32:19 PST |
	|         | addons enable ingress-dns                                             |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=5                                                |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | ingress-addon-legacy-20211117143126-2140 | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:32:20 PST | Wed, 17 Nov 2021 14:32:24 PST |
	|         | ingress-addon-legacy-20211117143126-2140                              |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | json-output-20211117143224-2140          | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:33:24 PST | Wed, 17 Nov 2021 14:33:28 PST |
	|         | json-output-20211117143224-2140                                       |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | json-output-error-20211117143328-2140    | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:33:28 PST | Wed, 17 Nov 2021 14:33:29 PST |
	|         | json-output-error-20211117143328-2140                                 |                                          |         |         |                               |                               |
	| start   | -p                                                                    | docker-network-20211117143329-2140       | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:33:29 PST | Wed, 17 Nov 2021 14:34:59 PST |
	|         | docker-network-20211117143329-2140                                    |                                          |         |         |                               |                               |
	|         | --network=                                                            |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | docker-network-20211117143329-2140       | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:34:59 PST | Wed, 17 Nov 2021 14:35:04 PST |
	|         | docker-network-20211117143329-2140                                    |                                          |         |         |                               |                               |
	| start   | -p                                                                    | docker-network-20211117143504-2140       | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:35:04 PST | Wed, 17 Nov 2021 14:36:18 PST |
	|         | docker-network-20211117143504-2140                                    |                                          |         |         |                               |                               |
	|         | --network=bridge                                                      |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | docker-network-20211117143504-2140       | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:36:18 PST | Wed, 17 Nov 2021 14:36:23 PST |
	|         | docker-network-20211117143504-2140                                    |                                          |         |         |                               |                               |
	| start   | -p                                                                    | existing-network-20211117143628-2140     | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:36:28 PST | Wed, 17 Nov 2021 14:37:43 PST |
	|         | existing-network-20211117143628-2140                                  |                                          |         |         |                               |                               |
	|         | --network=existing-network                                            |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | existing-network-20211117143628-2140     | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:37:43 PST | Wed, 17 Nov 2021 14:37:49 PST |
	|         | existing-network-20211117143628-2140                                  |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | mount-start-1-20211117143749-2140        | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:39:23 PST | Wed, 17 Nov 2021 14:39:30 PST |
	|         | mount-start-1-20211117143749-2140                                     |                                          |         |         |                               |                               |
	|         | --alsologtostderr -v=5                                                |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | mount-start-2-20211117143749-2140        | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:40:53 PST | Wed, 17 Nov 2021 14:40:57 PST |
	|         | mount-start-2-20211117143749-2140                                     |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | mount-start-1-20211117143749-2140        | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:40:57 PST | Wed, 17 Nov 2021 14:40:58 PST |
	|         | mount-start-1-20211117143749-2140                                     |                                          |         |         |                               |                               |
	| profile | list --output json                                                    | minikube                                 | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:41:46 PST | Wed, 17 Nov 2021 14:41:46 PST |
	| delete  | -p                                                                    | multinode-20211117144058-2140-m02        | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:46:11 PST | Wed, 17 Nov 2021 14:46:21 PST |
	|         | multinode-20211117144058-2140-m02                                     |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | multinode-20211117144058-2140            | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:46:22 PST | Wed, 17 Nov 2021 14:46:23 PST |
	|         | multinode-20211117144058-2140                                         |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | test-preload-20211117144623-2140         | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:47:09 PST | Wed, 17 Nov 2021 14:47:13 PST |
	|         | test-preload-20211117144623-2140                                      |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | scheduled-stop-20211117144713-2140       | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:47:57 PST | Wed, 17 Nov 2021 14:48:01 PST |
	|         | scheduled-stop-20211117144713-2140                                    |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | skaffold-20211117144801-2140             | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:48:49 PST | Wed, 17 Nov 2021 14:48:54 PST |
	|         | skaffold-20211117144801-2140                                          |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | insufficient-storage-20211117144854-2140 | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:49:02 PST | Wed, 17 Nov 2021 14:49:07 PST |
	|         | insufficient-storage-20211117144854-2140                              |                                          |         |         |                               |                               |
	| delete  | -p flannel-20211117144907-2140                                        | flannel-20211117144907-2140              | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:49:07 PST | Wed, 17 Nov 2021 14:49:08 PST |
	| delete  | -p                                                                    | offline-docker-20211117144907-2140       | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:49:54 PST | Wed, 17 Nov 2021 14:50:06 PST |
	|         | offline-docker-20211117144907-2140                                    |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | force-systemd-env-20211117144925-2140    | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:50:10 PST | Wed, 17 Nov 2021 14:50:16 PST |
	|         | force-systemd-env-20211117144925-2140                                 |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | force-systemd-flag-20211117145006-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:50:52 PST | Wed, 17 Nov 2021 14:50:56 PST |
	|         | force-systemd-flag-20211117145006-2140                                |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | docker-flags-20211117145016-2140         | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:51:07 PST | Wed, 17 Nov 2021 14:51:15 PST |
	|         | docker-flags-20211117145016-2140                                      |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | cert-options-20211117145115-2140         | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:52:02 PST | Wed, 17 Nov 2021 14:52:09 PST |
	|         | cert-options-20211117145115-2140                                      |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | cert-expiration-20211117145056-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:55:51 PST | Wed, 17 Nov 2021 14:55:57 PST |
	|         | cert-expiration-20211117145056-2140                                   |                                          |         |         |                               |                               |
	| start   | -p                                                                    | running-upgrade-20211117145209-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:53:39 PST | Wed, 17 Nov 2021 14:56:15 PST |
	|         | running-upgrade-20211117145209-2140                                   |                                          |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                                       |                                          |         |         |                               |                               |
	|         | -v=1 --driver=docker                                                  |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | running-upgrade-20211117145209-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:56:15 PST | Wed, 17 Nov 2021 14:56:21 PST |
	|         | running-upgrade-20211117145209-2140                                   |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | kubernetes-upgrade-20211117145621-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:58:16 PST | Wed, 17 Nov 2021 14:58:17 PST |
	|         | kubernetes-upgrade-20211117145621-2140                                |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | missing-upgrade-20211117145557-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:59:45 PST | Wed, 17 Nov 2021 14:59:46 PST |
	|         | missing-upgrade-20211117145557-2140                                   |                                          |         |         |                               |                               |
	| delete  | -p pause-20211117145946-2140                                          | pause-20211117145946-2140                | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:59:49 PST | Wed, 17 Nov 2021 14:59:50 PST |
	|         | --alsologtostderr -v=5                                                |                                          |         |         |                               |                               |
	| profile | list --output json                                                    | minikube                                 | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:59:50 PST | Wed, 17 Nov 2021 14:59:50 PST |
	| delete  | -p pause-20211117145946-2140                                          | pause-20211117145946-2140                | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:59:51 PST | Wed, 17 Nov 2021 14:59:52 PST |
	| profile | list                                                                  | minikube                                 | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:59:53 PST | Wed, 17 Nov 2021 14:59:53 PST |
	| delete  | -p                                                                    | NoKubernetes-20211117145952-2140         | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:59:54 PST | Wed, 17 Nov 2021 14:59:55 PST |
	|         | NoKubernetes-20211117145952-2140                                      |                                          |         |         |                               |                               |
	| delete  | -p auto-20211117144907-2140                                           | auto-20211117144907-2140                 | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:59:56 PST | Wed, 17 Nov 2021 14:59:56 PST |
	| delete  | -p kindnet-20211117144908-2140                                        | kindnet-20211117144908-2140              | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:59:57 PST | Wed, 17 Nov 2021 14:59:58 PST |
	| delete  | -p false-20211117144908-2140                                          | false-20211117144908-2140                | jenkins | v1.24.0 | Wed, 17 Nov 2021 14:59:59 PST | Wed, 17 Nov 2021 14:59:59 PST |
	| delete  | -p                                                                    | enable-default-cni-20211117144907-2140   | jenkins | v1.24.0 | Wed, 17 Nov 2021 15:00:00 PST | Wed, 17 Nov 2021 15:00:00 PST |
	|         | enable-default-cni-20211117144907-2140                                |                                          |         |         |                               |                               |
	| delete  | -p bridge-20211117144907-2140                                         | bridge-20211117144907-2140               | jenkins | v1.24.0 | Wed, 17 Nov 2021 15:00:01 PST | Wed, 17 Nov 2021 15:00:02 PST |
	| delete  | -p kubenet-20211117144907-2140                                        | kubenet-20211117144907-2140              | jenkins | v1.24.0 | Wed, 17 Nov 2021 15:00:03 PST | Wed, 17 Nov 2021 15:00:03 PST |
	| delete  | -p calico-20211117144908-2140                                         | calico-20211117144908-2140               | jenkins | v1.24.0 | Wed, 17 Nov 2021 15:00:04 PST | Wed, 17 Nov 2021 15:00:05 PST |
	| delete  | -p cilium-20211117144908-2140                                         | cilium-20211117144908-2140               | jenkins | v1.24.0 | Wed, 17 Nov 2021 15:00:05 PST | Wed, 17 Nov 2021 15:00:06 PST |
	| delete  | -p                                                                    | custom-weave-20211117144908-2140         | jenkins | v1.24.0 | Wed, 17 Nov 2021 15:00:07 PST | Wed, 17 Nov 2021 15:00:07 PST |
	|         | custom-weave-20211117144908-2140                                      |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | old-k8s-version-20211117150007-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 15:00:12 PST | Wed, 17 Nov 2021 15:00:12 PST |
	|         | old-k8s-version-20211117150007-2140                                   |                                          |         |         |                               |                               |
	| delete  | -p                                                                    | old-k8s-version-20211117150007-2140      | jenkins | v1.24.0 | Wed, 17 Nov 2021 15:00:12 PST | Wed, 17 Nov 2021 15:00:13 PST |
	|         | old-k8s-version-20211117150007-2140                                   |                                          |         |         |                               |                               |
	|---------|-----------------------------------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 15:00:10
	Running on machine: 37310
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 15:00:10.042893   14568 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:10.043019   14568 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:10.043025   14568 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:10.043028   14568 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:10.043098   14568 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:10.043324   14568 out.go:304] Setting JSON to false
	I1117 15:00:10.068329   14568 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3585,"bootTime":1637186425,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:10.068429   14568 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:10.095630   14568 out.go:176] * [old-k8s-version-20211117150007-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:10.095860   14568 notify.go:174] Checking for updates...
	I1117 15:00:10.143174   14568 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:10.169316   14568 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:10.195302   14568 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:10.220863   14568 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:10.221279   14568 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:10.221358   14568 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 15:00:10.221388   14568 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:10.309053   14568 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:10.334996   14568 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:10.335105   14568 start.go:280] selected driver: docker
	I1117 15:00:10.335118   14568 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:10.335142   14568 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:10.382544   14568 out.go:176] 
	W1117 15:00:10.382741   14568 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:10.382813   14568 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	
	* 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "stopped-upgrade-20211117145817-2140": docker container inspect stopped-upgrade-20211117145817-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: Bad response from Docker engine
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:215: `minikube logs` after upgrade to HEAD from v1.9.0 failed: exit status 80
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.43s)

TestStartStop/group/no-preload/serial/FirstStart (0.71s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20211117150014-2140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-20211117150014-2140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 69 (485.067508ms)

-- stdout --
	* [no-preload-20211117150014-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:14.406109   14705 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:14.406235   14705 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:14.406241   14705 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:14.406244   14705 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:14.406333   14705 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:14.406654   14705 out.go:304] Setting JSON to false
	I1117 15:00:14.434034   14705 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3589,"bootTime":1637186425,"procs":364,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:14.434121   14705 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:14.481988   14705 out.go:176] * [no-preload-20211117150014-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:14.482125   14705 notify.go:174] Checking for updates...
	I1117 15:00:14.530134   14705 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:14.556198   14705 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:14.582072   14705 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:14.608116   14705 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:14.608673   14705 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:14.608758   14705 config.go:176] Loaded profile config "stopped-upgrade-20211117145817-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I1117 15:00:14.608799   14705 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:14.703949   14705 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:14.731015   14705 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:14.731059   14705 start.go:280] selected driver: docker
	I1117 15:00:14.731099   14705 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:14.731129   14705 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:14.778874   14705 out.go:176] 
	W1117 15:00:14.779253   14705 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:14.779391   14705 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:14.826586   14705 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p no-preload-20211117150014-2140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (122.514813ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (97.550908ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (0.71s)

TestStartStop/group/no-preload/serial/DeployApp (0.65s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20211117150014-2140 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context no-preload-20211117150014-2140 create -f testdata/busybox.yaml: exit status 1 (40.2908ms)

** stderr ** 
	error: context "no-preload-20211117150014-2140" does not exist

** /stderr **
start_stop_delete_test.go:181: kubectl --context no-preload-20211117150014-2140 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (123.152454ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (154.275484ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (210.711287ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (116.021518ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.65s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (0.76s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117150015-2140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117150015-2140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3: exit status 69 (533.244376ms)

-- stdout --
	* [default-k8s-different-port-20211117150015-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:15.436466   14746 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:15.436591   14746 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:15.436598   14746 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:15.436601   14746 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:15.436680   14746 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:15.437011   14746 out.go:304] Setting JSON to false
	I1117 15:00:15.464931   14746 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3590,"bootTime":1637186425,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:15.465023   14746 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:15.492428   14746 out.go:176] * [default-k8s-different-port-20211117150015-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:15.492599   14746 notify.go:174] Checking for updates...
	I1117 15:00:15.540821   14746 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:15.566806   14746 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:15.592787   14746 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:15.618642   14746 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:15.619223   14746 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:15.619283   14746 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:15.752108   14746 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:15.777495   14746 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:15.777525   14746 start.go:280] selected driver: docker
	I1117 15:00:15.777536   14746 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:15.777554   14746 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:15.825857   14746 out.go:176] 
	W1117 15:00:15.826092   14746 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:15.826268   14746 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:15.898935   14746 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117150015-2140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (126.555247ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (98.391709ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (0.76s)
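Editor's note: the root cause of this FirstStart failure is visible in the stderr block above — minikube's docker driver health probe (`docker version --format {{.Server.Os}}-{{.Server.Version}}`) exited non-zero with "Bad response from Docker engine", which minikube reports as PROVIDER_DOCKER_VERSION_EXIT_1. The probe can be reproduced manually; a minimal sketch, using the format template taken from the log (the if/else and messages are illustrative, not minikube's actual code):

```shell
# Sketch: rerun the docker driver health probe that failed above.
# The --format template is copied from the logged check.
if out=$(docker version --format '{{.Server.Os}}-{{.Server.Version}}' 2>&1); then
  echo "docker healthy: $out"
else
  # minikube maps this failure to PROVIDER_DOCKER_VERSION_EXIT_1
  echo "docker unhealthy: $out"
fi
```

On this runner the probe would print the "unhealthy" branch, matching the daemon error seen throughout the report.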

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.54s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20211117150014-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20211117150014-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (232.049077ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "no-preload-20211117150014-2140" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:192: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20211117150014-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20211117150014-2140 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context no-preload-20211117150014-2140 describe deploy/metrics-server -n kube-system: exit status 1 (41.969368ms)

** stderr ** 
	error: context "no-preload-20211117150014-2140" does not exist

** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context no-preload-20211117150014-2140 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (168.906377ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (101.002255ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.54s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (0.6s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20211117150015-2140 create -f testdata/busybox.yaml

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117150015-2140 create -f testdata/busybox.yaml: exit status 1 (41.344501ms)

** stderr ** 
	error: context "default-k8s-different-port-20211117150015-2140" does not exist

** /stderr **
start_stop_delete_test.go:181: kubectl --context default-k8s-different-port-20211117150015-2140 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (176.540815ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (98.062415ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (142.635702ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (142.410281ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (0.60s)
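Editor's note: the kubectl failures in this section all reduce to `error: context "…" does not exist` — the cluster was never created, so no kubeconfig context was written. A minimal sketch for verifying a context exists before running commands against it (the profile name is taken from the log; the check itself is illustrative):

```shell
# Sketch: confirm a kubeconfig context is present before using --context.
ctx="default-k8s-different-port-20211117150015-2140"  # example name from the log
if kubectl config get-contexts -o name 2>/dev/null | grep -qx "$ctx"; then
  echo "context present: $ctx"
else
  # matches the kubectl error seen above when the cluster never started
  echo "context missing: $ctx"
fi
```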

TestStartStop/group/no-preload/serial/Stop (0.35s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20211117150014-2140 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p no-preload-20211117150014-2140 --alsologtostderr -v=3: exit status 85 (99.690912ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
** stderr ** 
	I1117 15:00:16.304481   14772 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:16.304643   14772 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:16.304651   14772 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:16.304655   14772 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:16.304741   14772 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:16.304938   14772 out.go:304] Setting JSON to false
	I1117 15:00:16.305079   14772 mustload.go:65] Loading cluster: no-preload-20211117150014-2140
	I1117 15:00:16.331532   14772 out.go:176] * Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 15:00:16.357690   14772 out.go:176]   To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p no-preload-20211117150014-2140 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/Stop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (147.382861ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (98.73839ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (0.35s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.51s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (143.575822ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 85 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\""*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20211117150014-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:231: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20211117150014-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (126.214626ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "no-preload-20211117150014-2140" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:233: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20211117150014-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (145.148796ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (97.617674ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.51s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.49s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20211117150015-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20211117150015-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (149.381265ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "default-k8s-different-port-20211117150015-2140" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:192: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20211117150015-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20211117150015-2140 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117150015-2140 describe deploy/metrics-server -n kube-system: exit status 1 (56.44715ms)

** stderr ** 
	error: context "default-k8s-different-port-20211117150015-2140" does not exist

** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-different-port-20211117150015-2140 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (167.927447ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (118.866125ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.49s)

TestStartStop/group/no-preload/serial/SecondStart (0.76s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20211117150014-2140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-20211117150014-2140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 69 (444.685732ms)

-- stdout --
	* [no-preload-20211117150014-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:17.163017   14799 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:17.163151   14799 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:17.163157   14799 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:17.163160   14799 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:17.163241   14799 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:17.163471   14799 out.go:304] Setting JSON to false
	I1117 15:00:17.188395   14799 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3592,"bootTime":1637186425,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:17.188491   14799 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:17.245982   14799 out.go:176] * [no-preload-20211117150014-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:17.246205   14799 notify.go:174] Checking for updates...
	I1117 15:00:17.271705   14799 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:17.298504   14799 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:17.324597   14799 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:17.350594   14799 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:17.351014   14799 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:17.351059   14799 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:17.444377   14799 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:17.470957   14799 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:17.470996   14799 start.go:280] selected driver: docker
	I1117 15:00:17.471008   14799 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:17.471055   14799 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:17.517782   14799 out.go:176] 
	W1117 15:00:17.518084   14799 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:17.518193   14799 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:17.543824   14799 out.go:176] 

** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p no-preload-20211117150014-2140 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (166.01883ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (147.188956ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (0.76s)

TestStartStop/group/default-k8s-different-port/serial/Stop (0.48s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20211117150015-2140 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20211117150015-2140 --alsologtostderr -v=3: exit status 85 (157.3966ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
** stderr ** 
	I1117 15:00:17.328692   14804 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:17.350596   14804 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:17.350604   14804 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:17.350609   14804 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:17.350723   14804 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:17.350904   14804 out.go:304] Setting JSON to false
	I1117 15:00:17.351022   14804 mustload.go:65] Loading cluster: default-k8s-different-port-20211117150015-2140
	I1117 15:00:17.376524   14804 out.go:176] * Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 15:00:17.402619   14804 out.go:176]   To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p default-k8s-different-port-20211117150015-2140 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (227.001866ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (98.074836ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Stop (0.48s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.49s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (146.861626ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 85 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\""*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20211117150015-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:231: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20211117150015-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (97.70067ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "default-k8s-different-port-20211117150015-2140" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:233: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20211117150015-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (146.04998ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (99.923853ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.49s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20211117150014-2140" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (127.782212ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (97.104275ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.23s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-20211117150014-2140" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20211117150014-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context no-preload-20211117150014-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (41.630333ms)

** stderr ** 
	error: context "no-preload-20211117150014-2140" does not exist

** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-20211117150014-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (138.782336ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (181.12431ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.36s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (0.8s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117150015-2140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117150015-2140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3: exit status 69 (479.34611ms)

-- stdout --
	* [default-k8s-different-port-20211117150015-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:18.266010   14833 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:18.266204   14833 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:18.266210   14833 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:18.266213   14833 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:18.266288   14833 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:18.266547   14833 out.go:304] Setting JSON to false
	I1117 15:00:18.294625   14833 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3593,"bootTime":1637186425,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:18.294728   14833 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:18.321829   14833 out.go:176] * [default-k8s-different-port-20211117150015-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:18.322052   14833 notify.go:174] Checking for updates...
	I1117 15:00:18.347412   14833 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:18.373266   14833 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:18.399658   14833 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:18.451232   14833 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:18.451664   14833 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:18.451705   14833 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:18.579723   14833 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:18.605756   14833 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:18.605810   14833 start.go:280] selected driver: docker
	I1117 15:00:18.605822   14833 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:18.605845   14833 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:18.652219   14833 out.go:176] 
	W1117 15:00:18.652391   14833 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:18.652460   14833 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:18.678705   14833 out.go:176] 

** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p default-k8s-different-port-20211117150015-2140 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.22.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (166.313674ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (149.662526ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (0.80s)
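Every start attempt in this group fails the same way: minikube's Docker health probe (`docker version --format {{.Server.Os}}-{{.Server.Version}}`) gets "Bad response from Docker engine", so minikube exits with status 69, the value its exit-code scheme reserves for an unavailable provider. The same value is BSD's EX_UNAVAILABLE ("service unavailable"); a quick sanity check of that constant, a sketch using Python's stdlib on a Unix host:

```python
import os

# On Unix, the os module exposes the BSD sysexits.h constants.
# 69 is EX_UNAVAILABLE ("service unavailable"), matching the exit
# status 69 seen above when the Docker provider is unhealthy.
print(os.EX_UNAVAILABLE)  # 69
```

So the downstream failures in this section (docker inspect, status, ssh) are all consequences of that one unhealthy Docker daemon, not independent regressions.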

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20211117150014-2140 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p no-preload-20211117150014-2140 "sudo crictl images -o json": exit status 85 (97.693041ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p no-preload-20211117150014-2140 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:289: failed to decode images json invalid character '*' looking for beginning of value. output:
* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"
start_stop_delete_test.go:289: v1.22.4-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-proxy:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-scheduler:v1.22.4-rc.0",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
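The decode failure above is mechanical: the test feeds the `crictl images -o json` output to a JSON decoder, but because the profile is missing, minikube prints its help text instead, so the first byte the decoder sees is `*`. A minimal sketch of the same failure, in Python rather than the test's Go, using the message quoted in the log:

```python
import json

# When the profile is absent, minikube emits help text instead of the
# `crictl images -o json` payload, so JSON decoding trips on the first byte.
output = '* Profile "no-preload-20211117150014-2140" not found.'

try:
    json.loads(output)
except json.JSONDecodeError as err:
    # err.pos is 0: the leading '*' is not a valid start of a JSON value.
    print(f"decode failed at offset {err.pos}: unexpected {output[err.pos]!r}")
```

Go's encoding/json reports the same condition as `invalid character '*' looking for beginning of value`, which is exactly the error logged above; the "images missing" diff that follows is then empty-vs-want, since nothing was decoded.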
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (195.230787ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (98.172365ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/no-preload/serial/Pause (0.61s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20211117150014-2140 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p no-preload-20211117150014-2140 --alsologtostderr -v=1: exit status 85 (149.000762ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
** stderr ** 
	I1117 15:00:18.911141   14853 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:18.911289   14853 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:18.911296   14853 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:18.911299   14853 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:18.911385   14853 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:18.911576   14853 out.go:304] Setting JSON to false
	I1117 15:00:18.911595   14853 mustload.go:65] Loading cluster: no-preload-20211117150014-2140
	I1117 15:00:18.940713   14853 out.go:176] * Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 15:00:18.992918   14853 out.go:176]   To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p no-preload-20211117150014-2140 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (118.388537ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (124.07602ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20211117150014-2140

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect no-preload-20211117150014-2140: exit status 1 (121.21167ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20211117150014-2140 -n no-preload-20211117150014-2140: exit status 85 (98.475978ms)

-- stdout --
	* Profile "no-preload-20211117150014-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p no-preload-20211117150014-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "no-preload-20211117150014-2140" host is not running, skipping log retrieval (state="* Profile \"no-preload-20211117150014-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p no-preload-20211117150014-2140\"")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.61s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.27s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20211117150015-2140" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (119.093459ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (141.061749ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.27s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.3s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20211117150015-2140" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211117150015-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211117150015-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (39.274095ms)

** stderr ** 
	error: context "default-k8s-different-port-20211117150015-2140" does not exist

** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20211117150015-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (162.07697ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (93.303346ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.30s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20211117150015-2140 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20211117150015-2140 "sudo crictl images -o json": exit status 85 (96.814425ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20211117150015-2140 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:289: failed to decode images json invalid character '*' looking for beginning of value. output:
* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"
start_stop_delete_test.go:289: v1.22.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.3",
- 	"k8s.gcr.io/kube-proxy:v1.22.3",
- 	"k8s.gcr.io/kube-scheduler:v1.22.3",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (128.952182ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (101.073003ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/default-k8s-different-port/serial/Pause (0.57s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20211117150015-2140 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p default-k8s-different-port-20211117150015-2140 --alsologtostderr -v=1: exit status 85 (100.029245ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
** stderr ** 
	I1117 15:00:19.952441   14893 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:19.952702   14893 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:19.952709   14893 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:19.952712   14893 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:19.952795   14893 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:19.952977   14893 out.go:304] Setting JSON to false
	I1117 15:00:19.952996   14893 mustload.go:65] Loading cluster: default-k8s-different-port-20211117150015-2140
	I1117 15:00:19.979794   14893 out.go:176] * Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 15:00:20.006691   14893 out.go:176]   To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p default-k8s-different-port-20211117150015-2140 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (123.296507ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (108.077227ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect default-k8s-different-port-20211117150015-2140: exit status 1 (137.080798ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20211117150015-2140 -n default-k8s-different-port-20211117150015-2140: exit status 85 (98.105142ms)

-- stdout --
	* Profile "default-k8s-different-port-20211117150015-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20211117150015-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20211117150015-2140" host is not running, skipping log retrieval (state="* Profile \"default-k8s-different-port-20211117150015-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p default-k8s-different-port-20211117150015-2140\"")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (0.57s)

TestStartStop/group/newest-cni/serial/FirstStart (0.69s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20211117150021-2140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-20211117150021-2140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 69 (471.21671ms)

-- stdout --
	* [newest-cni-20211117150021-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:21.209162   14956 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:21.230135   14956 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:21.230160   14956 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:21.230168   14956 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:21.230413   14956 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:21.231113   14956 out.go:304] Setting JSON to false
	I1117 15:00:21.259152   14956 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3596,"bootTime":1637186425,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:21.259289   14956 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:21.284985   14956 out.go:176] * [newest-cni-20211117150021-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:21.285197   14956 notify.go:174] Checking for updates...
	I1117 15:00:21.333149   14956 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:21.359048   14956 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:21.385156   14956 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:21.410907   14956 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:21.411351   14956 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:21.411396   14956 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:21.510539   14956 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:21.537287   14956 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:21.537303   14956 start.go:280] selected driver: docker
	I1117 15:00:21.537313   14956 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:21.537325   14956 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:21.584853   14956 out.go:176] 
	W1117 15:00:21.585102   14956 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:21.585212   14956 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:21.611257   14956 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p newest-cni-20211117150021-2140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117150021-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117150021-2140: exit status 1 (124.221588ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140: exit status 85 (97.674705ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117150021-2140" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117150021-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117150021-2140\"")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (0.69s)
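Every `FirstStart` failure in this run exits with status 69 for the same root cause: minikube's driver health probe, `docker version --format {{.Server.Os}}-{{.Server.Version}}` (logged at start.go:786 above), returns "Error response from daemon: Bad response from Docker engine". The probe can be reproduced outside the suite as a preflight check; the sketch below is illustrative only (`check_docker` and its messages are not part of the test harness):

```shell
#!/bin/sh
# Reproduce the health probe minikube runs before selecting the docker
# driver (start.go:786 in the log above). A healthy engine prints
# something like "linux-20.10.x"; the failure mode in this report
# surfaces as PROVIDER_DOCKER_VERSION_EXIT_1.
check_docker() {
  if out=$(docker version --format '{{.Server.Os}}-{{.Server.Version}}' 2>&1); then
    echo "healthy: $out"
  else
    echo "unhealthy: $out"
  fi
}

status="$(check_docker)"
echo "$status"
```

When the probe fails this way, every subsequent subtest in the group fails fast for lack of a profile, which is why the durations above are all well under a second.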

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20211117150021-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20211117150021-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (120.30562ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "newest-cni-20211117150021-2140" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:192: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20211117150021-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117150021-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117150021-2140: exit status 1 (121.434073ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140: exit status 85 (97.61985ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117150021-2140" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117150021-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117150021-2140\"")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.34s)

TestStartStop/group/newest-cni/serial/Stop (0.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20211117150021-2140 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p newest-cni-20211117150021-2140 --alsologtostderr -v=3: exit status 85 (97.776967ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
** stderr ** 
	I1117 15:00:22.238609   14995 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:22.238796   14995 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:22.238803   14995 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:22.238806   14995 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:22.238891   14995 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:22.239089   14995 out.go:304] Setting JSON to false
	I1117 15:00:22.239220   14995 mustload.go:65] Loading cluster: newest-cni-20211117150021-2140
	I1117 15:00:22.265489   14995 out.go:176] * Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 15:00:22.291666   14995 out.go:176]   To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p newest-cni-20211117150021-2140 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117150021-2140

=== CONT  TestStartStop/group/newest-cni/serial/Stop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117150021-2140: exit status 1 (261.229574ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140: exit status 85 (97.655507ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117150021-2140" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117150021-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117150021-2140\"")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (0.46s)

TestStartStop/group/embed-certs/serial/FirstStart (0.78s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20211117150022-2140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-20211117150022-2140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3: exit status 69 (506.209804ms)

-- stdout --
	* [embed-certs-20211117150022-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:22.372735   15000 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:22.372867   15000 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:22.372873   15000 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:22.372876   15000 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:22.372960   15000 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:22.373267   15000 out.go:304] Setting JSON to false
	I1117 15:00:22.400178   15000 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3597,"bootTime":1637186425,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:22.400289   15000 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:22.427262   15000 out.go:176] * [embed-certs-20211117150022-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:22.427451   15000 notify.go:174] Checking for updates...
	I1117 15:00:22.475206   15000 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:22.500867   15000 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:22.527036   15000 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:22.552800   15000 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:22.553908   15000 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:22.554224   15000 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:22.689722   15000 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:22.716211   15000 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:22.716269   15000 start.go:280] selected driver: docker
	I1117 15:00:22.716279   15000 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:22.716301   15000 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:22.763041   15000 out.go:176] 
	W1117 15:00:22.763171   15000 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:22.763247   15000 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:22.815043   15000 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p embed-certs-20211117150022-2140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (173.972336ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (98.399132ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (0.78s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.55s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140: exit status 85 (185.392969ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 85 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"* Profile \"newest-cni-20211117150021-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117150021-2140\""*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20211117150021-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:231: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20211117150021-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (104.633998ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "newest-cni-20211117150021-2140" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:233: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20211117150021-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117150021-2140

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117150021-2140: exit status 1 (168.763817ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140: exit status 85 (94.503153ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117150021-2140" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117150021-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117150021-2140\"")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.55s)

TestStartStop/group/embed-certs/serial/DeployApp (0.72s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20211117150022-2140 create -f testdata/busybox.yaml

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Non-zero exit: kubectl --context embed-certs-20211117150022-2140 create -f testdata/busybox.yaml: exit status 1 (40.043316ms)

** stderr ** 
	error: context "embed-certs-20211117150022-2140" does not exist

** /stderr **
start_stop_delete_test.go:181: kubectl --context embed-certs-20211117150022-2140 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (124.967163ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (207.986761ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (248.502188ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (95.64388ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.72s)

TestStartStop/group/newest-cni/serial/SecondStart (0.82s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20211117150021-2140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-20211117150021-2140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0: exit status 69 (516.111678ms)

-- stdout --
	* [newest-cni-20211117150021-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:23.251949   15025 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:23.252100   15025 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:23.252107   15025 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:23.252110   15025 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:23.252200   15025 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:23.252458   15025 out.go:304] Setting JSON to false
	I1117 15:00:23.279642   15025 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3598,"bootTime":1637186425,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:23.279779   15025 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:23.306747   15025 out.go:176] * [newest-cni-20211117150021-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:23.306912   15025 notify.go:174] Checking for updates...
	I1117 15:00:23.353744   15025 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:23.384716   15025 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:23.436417   15025 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:23.483179   15025 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:23.483899   15025 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:23.483956   15025 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:23.580238   15025 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:23.607138   15025 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:23.607238   15025 start.go:280] selected driver: docker
	I1117 15:00:23.607251   15025 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:23.607279   15025 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:23.654768   15025 out.go:176] 
	W1117 15:00:23.654959   15025 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:23.655065   15025 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:23.702790   15025 out.go:176] 

** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p newest-cni-20211117150021-2140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.22.4-rc.0": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117150021-2140

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117150021-2140: exit status 1 (204.957478ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140: exit status 85 (101.161997ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117150021-2140" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117150021-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117150021-2140\"")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (0.82s)
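Every failure in this group follows the same pattern: the SecondStart trace above shows minikube's docker driver health probe (`docker version --format {{.Server.Os}}-{{.Server.Version}}`) returning "Error response from daemon: Bad response from Docker engine" (PROVIDER_DOCKER_VERSION_EXIT_1), so no profile was ever created and every later command exited with status 85. A minimal sketch of running that same probe by hand before re-running the suite:

```shell
# Run the probe minikube's docker driver uses (taken from the log above).
# If it fails with "Bad response from Docker engine", restart the Docker
# daemon (Docker Desktop on this macOS host) before re-running the tests.
if docker version --format '{{.Server.Os}}-{{.Server.Version}}' >/dev/null 2>&1; then
  echo "docker engine healthy"
else
  echo "docker engine unreachable"
fi
```

The probe exiting non-zero is exactly the `Healthy:false ... Reason:PROVIDER_DOCKER_VERSION_EXIT_1` state recorded in the driver status line of the trace.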

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.41s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20211117150022-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:190: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20211117150022-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (99.910173ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "embed-certs-20211117150022-2140" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:192: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20211117150022-2140 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20211117150022-2140 describe deploy/metrics-server -n kube-system

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:200: (dbg) Non-zero exit: kubectl --context embed-certs-20211117150022-2140 describe deploy/metrics-server -n kube-system: exit status 1 (41.103188ms)

** stderr ** 
	error: context "embed-certs-20211117150022-2140" does not exist

** /stderr **
start_stop_delete_test.go:202: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-20211117150022-2140 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:206: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (166.74566ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (96.264268ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.41s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20211117150021-2140 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p newest-cni-20211117150021-2140 "sudo crictl images -o json": exit status 85 (96.612833ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p newest-cni-20211117150021-2140 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:289: failed to decode images json invalid character '*' looking for beginning of value. output:
* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"
start_stop_delete_test.go:289: v1.22.4-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-proxy:v1.22.4-rc.0",
- 	"k8s.gcr.io/kube-scheduler:v1.22.4-rc.0",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117150021-2140

=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117150021-2140: exit status 1 (209.715285ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140: exit status 85 (96.022232ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117150021-2140" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117150021-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117150021-2140\"")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)
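The "failed to decode images json invalid character '*'" message is a downstream symptom of the same root cause: the test JSON-decodes the stdout of `minikube ssh ... "sudo crictl images -o json"`, but since the profile does not exist that stdout is minikube's advice text, which starts with `*` and is not JSON. A sketch reproducing the decode failure, using `python3 -m json.tool` in place of the test's Go decoder:

```shell
# Feed the advice text the test actually received into a JSON decoder;
# it rejects the leading '*' just as the Go decoder in the test did.
echo '* Profile "newest-cni-20211117150021-2140" not found.' \
  | python3 -m json.tool 2>/dev/null || echo "not valid JSON"
```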

TestStartStop/group/embed-certs/serial/Stop (0.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20211117150022-2140 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p embed-certs-20211117150022-2140 --alsologtostderr -v=3: exit status 85 (96.343742ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
** stderr ** 
	I1117 15:00:24.275235   15055 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:24.275387   15055 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:24.275394   15055 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:24.275399   15055 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:24.275484   15055 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:24.275660   15055 out.go:304] Setting JSON to false
	I1117 15:00:24.275792   15055 mustload.go:65] Loading cluster: embed-certs-20211117150022-2140
	I1117 15:00:24.301985   15055 out.go:176] * Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 15:00:24.327712   15055 out.go:176]   To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

** /stderr **
start_stop_delete_test.go:215: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p embed-certs-20211117150022-2140 --alsologtostderr -v=3" : exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140

=== CONT  TestStartStop/group/embed-certs/serial/Stop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (211.523646ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (99.021319ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (0.41s)

TestStartStop/group/newest-cni/serial/Pause (0.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20211117150021-2140 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p newest-cni-20211117150021-2140 --alsologtostderr -v=1: exit status 85 (99.382569ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
** stderr ** 
	I1117 15:00:24.479012   15061 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:24.479162   15061 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:24.479168   15061 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:24.479171   15061 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:24.479244   15061 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:24.479425   15061 out.go:304] Setting JSON to false
	I1117 15:00:24.479441   15061 mustload.go:65] Loading cluster: newest-cni-20211117150021-2140
	I1117 15:00:24.505766   15061 out.go:176] * Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 15:00:24.531766   15061 out.go:176]   To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p newest-cni-20211117150021-2140 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117150021-2140

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117150021-2140: exit status 1 (145.431066ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140: exit status 85 (114.643145ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117150021-2140" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117150021-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117150021-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20211117150021-2140

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:231: (dbg) Non-zero exit: docker inspect newest-cni-20211117150021-2140: exit status 1 (134.805713ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20211117150021-2140 -n newest-cni-20211117150021-2140: exit status 85 (98.768362ms)

-- stdout --
	* Profile "newest-cni-20211117150021-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p newest-cni-20211117150021-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "newest-cni-20211117150021-2140" host is not running, skipping log retrieval (state="* Profile \"newest-cni-20211117150021-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p newest-cni-20211117150021-2140\"")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.59s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.5s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (125.22279ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 85 (may be ok)
start_stop_delete_test.go:226: expected post-stop host status to be -"Stopped"- but got *"* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\""*
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20211117150022-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:231: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20211117150022-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: exit status 10 (123.501816ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: loading profile: cluster "embed-certs-20211117150022-2140" does not exist
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:233: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20211117150022-2140 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4": exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (150.01115ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (95.845092ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.50s)

TestStartStop/group/embed-certs/serial/SecondStart (0.7s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20211117150022-2140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-20211117150022-2140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3: exit status 69 (471.130956ms)

-- stdout --
	* [embed-certs-20211117150022-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1117 15:00:25.180688   15085 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:25.180830   15085 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:25.180836   15085 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:25.180839   15085 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:25.180935   15085 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:25.181184   15085 out.go:304] Setting JSON to false
	I1117 15:00:25.210904   15085 start.go:112] hostinfo: {"hostname":"37310.local","uptime":3600,"bootTime":1637186425,"procs":364,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 15:00:25.211011   15085 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 15:00:25.258737   15085 out.go:176] * [embed-certs-20211117150022-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 15:00:25.258832   15085 notify.go:174] Checking for updates...
	I1117 15:00:25.305795   15085 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 15:00:25.331767   15085 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 15:00:25.358039   15085 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 15:00:25.383818   15085 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 15:00:25.385462   15085 config.go:176] Loaded profile config "multinode-20211117144058-2140-m01": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 15:00:25.385508   15085 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 15:00:25.484653   15085 docker.go:108] docker version returned error: exit status 1
	I1117 15:00:25.511267   15085 out.go:176] * Using the docker driver based on user configuration
	I1117 15:00:25.511289   15085 start.go:280] selected driver: docker
	I1117 15:00:25.511295   15085 start.go:775] validating driver "docker" against <nil>
	I1117 15:00:25.511323   15085 start.go:786] status for docker: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Error response from daemon: Bad response from Docker engine Reason:PROVIDER_DOCKER_VERSION_EXIT_1 Fix: Doc:https://minikube.sigs.k8s.io/docs/drivers/docker/}
	I1117 15:00:25.559374   15085 out.go:176] 
	W1117 15:00:25.559602   15085 out.go:241] X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	X Exiting due to PROVIDER_DOCKER_VERSION_EXIT_1: "docker version --format -" exit status 1: Error response from daemon: Bad response from Docker engine
	W1117 15:00:25.559710   15085 out.go:241] * Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
	I1117 15:00:25.586267   15085 out.go:176] 

** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p embed-certs-20211117150022-2140 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.22.3": exit status 69
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (123.876187ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (101.838485ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (0.70s)
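The SecondStart failure above traces to minikube's Docker driver health probe: the stderr log shows `"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1`, which makes minikube abort with PROVIDER_DOCKER_VERSION_EXIT_1 before any cluster work begins. A minimal sketch of running that probe by hand before re-running the suite; the `healthy`/`unhealthy` wording is illustrative, not minikube output:

```shell
# Run the same version probe minikube uses to validate the docker driver
# (the command appears verbatim in the stderr log above); a broken engine
# fails it with "Error response from daemon: Bad response from Docker engine".
if docker version --format '{{.Server.Os}}-{{.Server.Version}}' 2>/dev/null; then
  engine_state="healthy"
else
  engine_state="unhealthy"
fi
echo "docker engine: $engine_state"
```

When the probe reports unhealthy, every later `docker inspect` and `minikube status` in this group fails the same way, so restarting Docker Desktop on the worker is the first fix to try.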

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20211117150022-2140" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (127.606227ms)

-- stdout --
	[]

                                                
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (116.205197ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-20211117150022-2140" does not exist
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211117150022-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20211117150022-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (40.901397ms)

** stderr ** 
	error: context "embed-certs-20211117150022-2140" does not exist

** /stderr **
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20211117150022-2140 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (126.8786ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (122.060989ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20211117150022-2140 "sudo crictl images -o json"
start_stop_delete_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p embed-certs-20211117150022-2140 "sudo crictl images -o json": exit status 85 (99.531365ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
start_stop_delete_test.go:289: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p embed-certs-20211117150022-2140 \"sudo crictl images -o json\"": exit status 85
start_stop_delete_test.go:289: failed to decode images json: invalid character '*' looking for beginning of value. output:
* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"
start_stop_delete_test.go:289: v1.22.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.4",
- 	"k8s.gcr.io/etcd:3.5.0-0",
- 	"k8s.gcr.io/kube-apiserver:v1.22.3",
- 	"k8s.gcr.io/kube-controller-manager:v1.22.3",
- 	"k8s.gcr.io/kube-proxy:v1.22.3",
- 	"k8s.gcr.io/kube-scheduler:v1.22.3",
- 	"k8s.gcr.io/pause:3.5",
- 	"kubernetesui/dashboard:v2.3.1",
- 	"kubernetesui/metrics-scraper:v1.0.7",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (123.595827ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (98.194105ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/embed-certs/serial/Pause (0.52s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20211117150022-2140 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p embed-certs-20211117150022-2140 --alsologtostderr -v=1: exit status 85 (96.135765ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
** stderr ** 
	I1117 15:00:26.734029   15148 out.go:297] Setting OutFile to fd 1 ...
	I1117 15:00:26.734171   15148 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:26.734177   15148 out.go:310] Setting ErrFile to fd 2...
	I1117 15:00:26.734180   15148 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 15:00:26.734255   15148 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 15:00:26.734428   15148 out.go:304] Setting JSON to false
	I1117 15:00:26.734443   15148 mustload.go:65] Loading cluster: embed-certs-20211117150022-2140
	I1117 15:00:26.760131   15148 out.go:176] * Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	I1117 15:00:26.786274   15148 out.go:176]   To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

** /stderr **
start_stop_delete_test.go:296: out/minikube-darwin-amd64 pause -p embed-certs-20211117150022-2140 --alsologtostderr -v=1 failed: exit status 85
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (121.462318ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (95.200538ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20211117150022-2140
helpers_test.go:231: (dbg) Non-zero exit: docker inspect embed-certs-20211117150022-2140: exit status 1 (113.171637ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: Bad response from Docker engine

** /stderr **
helpers_test.go:233: failed to get docker inspect: exit status 1
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20211117150022-2140 -n embed-certs-20211117150022-2140: exit status 85 (93.559059ms)

-- stdout --
	* Profile "embed-certs-20211117150022-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p embed-certs-20211117150022-2140"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "embed-certs-20211117150022-2140" host is not running, skipping log retrieval (state="* Profile \"embed-certs-20211117150022-2140\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p embed-certs-20211117150022-2140\"")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.52s)

Test pass (58/236)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 18.49
7 TestDownloadOnly/v1.14.0/kubectl 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.28
10 TestDownloadOnly/v1.22.3/json-events 8.27
11 TestDownloadOnly/v1.22.3/preload-exists 0
14 TestDownloadOnly/v1.22.3/kubectl 0
15 TestDownloadOnly/v1.22.3/LogsDuration 0.28
17 TestDownloadOnly/v1.22.4-rc.0/json-events 8.88
18 TestDownloadOnly/v1.22.4-rc.0/preload-exists 0
21 TestDownloadOnly/v1.22.4-rc.0/kubectl 0
22 TestDownloadOnly/v1.22.4-rc.0/LogsDuration 0.28
23 TestDownloadOnly/DeleteAll 1.12
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.64
25 TestDownloadOnlyKic 9.55
35 TestHyperKitDriverInstallOrUpdate 6.43
39 TestErrorSpam/start 2.48
40 TestErrorSpam/status 0.44
41 TestErrorSpam/pause 0.6
42 TestErrorSpam/unpause 0.78
43 TestErrorSpam/stop 44.27
46 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/CacheCmd/cache/add_local 1.63
68 TestFunctional/parallel/ConfigCmd 0.47
70 TestFunctional/parallel/DryRun 1.31
71 TestFunctional/parallel/InternationalLanguage 0.6
76 TestFunctional/parallel/AddonsCmd 0.27
91 TestFunctional/parallel/Version/short 0.09
95 TestFunctional/parallel/ImageCommands/Setup 4.06
100 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
101 TestFunctional/parallel/ProfileCmd/profile_list 0.36
103 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
110 TestFunctional/parallel/ImageCommands/ImageRemove 0.36
112 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
118 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
119 TestFunctional/delete_addon-resizer_images 0.26
120 TestFunctional/delete_my-image_image 0.11
121 TestFunctional/delete_minikube_cached_images 0.11
127 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.2
140 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
141 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
146 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
154 TestErrorJSONOutput 0.76
157 TestKicCustomNetwork/use_default_bridge_network 79.29
158 TestKicExistingNetwork 85.32
159 TestMainNoArgs 0.07
166 TestMountStart/serial/DeleteFirst 7.09
195 TestRunningBinaryUpgrade 251.98
210 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.8
211 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.69
212 TestStoppedBinaryUpgrade/Setup 0.78
221 TestPause/serial/DeletePaused 0.69
232 TestNoKubernetes/serial/VerifyK8sNotRunning 0.09
236 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.09
282 TestStartStop/group/newest-cni/serial/DeployApp 0
291 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
292 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.14.0/json-events (18.49s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117142321-2140 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117142321-2140 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker : (18.491064212s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (18.49s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
--- PASS: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211117142321-2140
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211117142321-2140: exit status 85 (275.860017ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 14:23:21
	Running on machine: 37310
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 14:23:21.999899    2150 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:23:22.000051    2150 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:23:22.000057    2150 out.go:310] Setting ErrFile to fd 2...
	I1117 14:23:22.000060    2150 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:23:22.000153    2150 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	W1117 14:23:22.000245    2150 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/config/config.json: no such file or directory
	I1117 14:23:22.000682    2150 out.go:304] Setting JSON to true
	I1117 14:23:22.028869    2150 start.go:112] hostinfo: {"hostname":"37310.local","uptime":1377,"bootTime":1637186425,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:23:22.028966    2150 start.go:120] gopshost.Virtualization returned error: not implemented yet
	W1117 14:23:22.055886    2150 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball: no such file or directory
	I1117 14:23:22.055924    2150 notify.go:174] Checking for updates...
	I1117 14:23:22.083243    2150 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 14:23:22.166699    2150 docker.go:108] docker version returned error: exit status 1
	I1117 14:23:22.192455    2150 start.go:280] selected driver: docker
	I1117 14:23:22.192473    2150 start.go:775] validating driver "docker" against <nil>
	I1117 14:23:22.192625    2150 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:23:22.357589    2150 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:23:22.410331    2150 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:23:22.571155    2150 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:23:22.597852    2150 start_flags.go:268] no existing cluster config was found, will generate one from the flags 
	I1117 14:23:22.651659    2150 start_flags.go:349] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1117 14:23:22.651776    2150 start_flags.go:740] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 14:23:22.651800    2150 cni.go:93] Creating CNI manager for ""
	I1117 14:23:22.651807    2150 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:23:22.651814    2150 start_flags.go:282] config:
	{Name:download-only-20211117142321-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20211117142321-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:23:22.677530    2150 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:23:22.703748    2150 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:23:22.703794    2150 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 14:23:22.703974    2150 cache.go:107] acquiring lock: {Name:mk25474a55302fe82d0cdb0c2c63bf43e7e10284 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:23:22.704013    2150 cache.go:107] acquiring lock: {Name:mk8e8da154ae7b72edd92dd6e492def651fe4750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:23:22.703976    2150 cache.go:107] acquiring lock: {Name:mk7974203566954f6f7e2266d99ce13dcb10f6a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:23:22.705090    2150 cache.go:107] acquiring lock: {Name:mk4d4af83b637c66adfcf188633ad7608d78efc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:23:22.705266    2150 cache.go:107] acquiring lock: {Name:mk8d6d2c64a8cdcab754406b3d5e20ce5aa8cf9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:23:22.705376    2150 cache.go:107] acquiring lock: {Name:mk930552e52d584dc0bc2b55bd9f15b63356d880 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:23:22.705422    2150 cache.go:107] acquiring lock: {Name:mk48d20979e281dfd5dc28b289e3f8dd4f46c5fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:23:22.705564    2150 cache.go:107] acquiring lock: {Name:mk6ed4774e490d74c36131020ba494e2a67495f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:23:22.705922    2150 cache.go:107] acquiring lock: {Name:mk0ad56869afd667573c0e3ed33f07e42f6bd5cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:23:22.706067    2150 cache.go:107] acquiring lock: {Name:mkc1e17f143c233926d05dad0939463ba9b1f551 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 14:23:22.706230    2150 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.14.0
	I1117 14:23:22.706250    2150 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.14.0
	I1117 14:23:22.706268    2150 image.go:134] retrieving image: k8s.gcr.io/coredns:1.3.1
	I1117 14:23:22.706287    2150 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.10
	I1117 14:23:22.706308    2150 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I1117 14:23:22.706385    2150 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I1117 14:23:22.706419    2150 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.14.0
	I1117 14:23:22.706420    2150 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/download-only-20211117142321-2140/config.json ...
	I1117 14:23:22.706429    2150 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I1117 14:23:22.706463    2150 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/profiles/download-only-20211117142321-2140/config.json: {Name:mk405edb8057c6e69792764d6d0acaf61fe700e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 14:23:22.706502    2150 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1117 14:23:22.706512    2150 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.14.0
	I1117 14:23:22.706851    2150 preload.go:132] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I1117 14:23:22.707170    2150 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/linux/v1.14.0/kubectl
	I1117 14:23:22.707171    2150 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/linux/v1.14.0/kubeadm
	I1117 14:23:22.707170    2150 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/linux/v1.14.0/kubelet
	I1117 14:23:22.707864    2150 image.go:176] found k8s.gcr.io/kube-controller-manager:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-controller-manager} tag:v1.14.0 original:k8s.gcr.io/kube-controller-manager:v1.14.0} opener:0xc0002b0000 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:23:22.707890    2150 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0
	I1117 14:23:22.708062    2150 image.go:176] found k8s.gcr.io/kube-scheduler:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-scheduler} tag:v1.14.0 original:k8s.gcr.io/kube-scheduler:v1.14.0} opener:0xc0001ac1c0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:23:22.708082    2150 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0
	I1117 14:23:22.708109    2150 image.go:176] found k8s.gcr.io/kube-apiserver:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-apiserver} tag:v1.14.0 original:k8s.gcr.io/kube-apiserver:v1.14.0} opener:0xc0002e4000 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:23:22.708131    2150 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0
	I1117 14:23:22.708565    2150 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0" took 4.592234ms
	I1117 14:23:22.708682    2150 image.go:176] found index.docker.io/kubernetesui/metrics-scraper:v1.0.7 locally: &{ref:{Repository:{Registry:{insecure:false registry:index.docker.io} repository:kubernetesui/metrics-scraper} tag:v1.0.7 original:docker.io/kubernetesui/metrics-scraper:v1.0.7} opener:0xc0006d0770 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:23:22.708690    2150 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.14.0" took 3.3992ms
	I1117 14:23:22.708701    2150 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7
	I1117 14:23:22.708788    2150 image.go:176] found k8s.gcr.io/kube-proxy:v1.14.0 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:kube-proxy} tag:v1.14.0 original:k8s.gcr.io/kube-proxy:v1.14.0} opener:0xc0001ac380 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:23:22.708800    2150 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0
	I1117 14:23:22.708870    2150 image.go:176] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:k8s-minikube/storage-provisioner} tag:v5 original:gcr.io/k8s-minikube/storage-provisioner:v5} opener:0xc0002b0150 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:23:22.708899    2150 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I1117 14:23:22.709051    2150 image.go:176] found k8s.gcr.io/pause:3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:pause} tag:3.1 original:k8s.gcr.io/pause:3.1} opener:0xc0000de150 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:23:22.709057    2150 image.go:176] found k8s.gcr.io/coredns:1.3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:coredns} tag:1.3.1 original:k8s.gcr.io/coredns:1.3.1} opener:0xc0006d08c0 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:23:22.709080    2150 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1
	I1117 14:23:22.709063    2150 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I1117 14:23:22.709334    2150 image.go:176] found index.docker.io/kubernetesui/dashboard:v2.3.1 locally: &{ref:{Repository:{Registry:{insecure:false registry:index.docker.io} repository:kubernetesui/dashboard} tag:v2.3.1 original:docker.io/kubernetesui/dashboard:v2.3.1} opener:0xc0000de230 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:23:22.709343    2150 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.14.0" took 5.324845ms
	I1117 14:23:22.709352    2150 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1
	I1117 14:23:22.709388    2150 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 5.423076ms
	I1117 14:23:22.709538    2150 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.14.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.0" took 4.082604ms
	I1117 14:23:22.709556    2150 image.go:176] found k8s.gcr.io/etcd:3.3.10 locally: &{ref:{Repository:{Registry:{insecure:false registry:k8s.gcr.io} repository:etcd} tag:3.3.10 original:k8s.gcr.io/etcd:3.3.10} opener:0xc000398000 tarballImage:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I1117 14:23:22.709608    2150 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10
	I1117 14:23:22.709651    2150 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 4.275877ms
	I1117 14:23:22.709731    2150 cache.go:96] cache image "k8s.gcr.io/coredns:1.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/coredns_1.3.1" took 4.489626ms
	I1117 14:23:22.709800    2150 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 5.758424ms
	I1117 14:23:22.709867    2150 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 4.863942ms
	I1117 14:23:22.709958    2150 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.10" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/etcd_3.3.10" took 5.949011ms
	I1117 14:23:22.812686    2150 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 14:23:22.812843    2150 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
	I1117 14:23:22.812926    2150 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
	I1117 14:23:25.756306    2150 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/darwin/v1.14.0/kubectl
	E1117 14:23:26.757454    2150 cache.go:215] Error caching images:  Caching images for kubeadm: caching images: caching image "/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.14.0": write: unable to calculate manifest: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117142321-2140"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.28s)
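The harness above records `minikube logs` exiting with status 85 and still passes the test, since a non-zero exit is expected for a download-only profile with no control plane. A minimal sketch of how a shell wrapper can capture and check such an exit status; the `sh -c 'exit 85'` command is a stand-in for the real minikube binary, not the actual test harness:

```shell
# Stand-in command that exits with status 85, mimicking
# "minikube logs" against a download-only profile.
sh -c 'exit 85'
status=$?
echo "captured exit status: $status"
```

Capturing `$?` immediately after the command is the key detail: any intervening command (even `echo`) would overwrite it.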

TestDownloadOnly/v1.22.3/json-events (8.27s)

=== RUN   TestDownloadOnly/v1.22.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117142321-2140 --force --alsologtostderr --kubernetes-version=v1.22.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117142321-2140 --force --alsologtostderr --kubernetes-version=v1.22.3 --container-runtime=docker --driver=docker : (8.273650871s)
--- PASS: TestDownloadOnly/v1.22.3/json-events (8.27s)

TestDownloadOnly/v1.22.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.3/preload-exists
--- PASS: TestDownloadOnly/v1.22.3/preload-exists (0.00s)

TestDownloadOnly/v1.22.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.3/kubectl
--- PASS: TestDownloadOnly/v1.22.3/kubectl (0.00s)

TestDownloadOnly/v1.22.3/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.22.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211117142321-2140
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211117142321-2140: exit status 85 (274.943809ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 14:23:50
	Running on machine: 37310
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 14:23:50.635975    2200 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:23:50.636165    2200 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:23:50.636171    2200 out.go:310] Setting ErrFile to fd 2...
	I1117 14:23:50.636174    2200 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:23:50.636249    2200 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	W1117 14:23:50.636326    2200 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/config/config.json: no such file or directory
	I1117 14:23:50.636482    2200 out.go:304] Setting JSON to true
	I1117 14:23:50.661076    2200 start.go:112] hostinfo: {"hostname":"37310.local","uptime":1405,"bootTime":1637186425,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:23:50.661170    2200 start.go:120] gopshost.Virtualization returned error: not implemented yet
	W1117 14:23:50.687931    2200 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball: no such file or directory
	I1117 14:23:50.687994    2200 notify.go:174] Checking for updates...
	I1117 14:23:50.714535    2200 config.go:176] Loaded profile config "download-only-20211117142321-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.14.0
	W1117 14:23:50.714659    2200 start.go:683] api.Load failed for download-only-20211117142321-2140: filestore "download-only-20211117142321-2140": Docker machine "download-only-20211117142321-2140" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 14:23:50.714734    2200 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 14:23:50.714776    2200 start.go:683] api.Load failed for download-only-20211117142321-2140: filestore "download-only-20211117142321-2140": Docker machine "download-only-20211117142321-2140" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 14:23:50.807808    2200 docker.go:132] docker version: linux-20.10.6
	I1117 14:23:50.807922    2200 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:23:50.984336    2200 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2021-11-17 22:23:50.924447632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:23:51.011261    2200 start.go:280] selected driver: docker
	I1117 14:23:51.011292    2200 start.go:775] validating driver "docker" against &{Name:download-only-20211117142321-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20211117142321-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:23:51.011730    2200 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:23:51.186430    2200 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2021-11-17 22:23:51.12677815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:23:51.188538    2200 cni.go:93] Creating CNI manager for ""
	I1117 14:23:51.188556    2200 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:23:51.188571    2200 start_flags.go:282] config:
	{Name:download-only-20211117142321-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:download-only-20211117142321-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:23:51.215801    2200 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:23:51.242362    2200 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:23:51.242368    2200 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:23:51.309683    2200 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	I1117 14:23:51.309705    2200 cache.go:57] Caching tarball of preloaded images
	I1117 14:23:51.309912    2200 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime docker
	I1117 14:23:51.336205    2200 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4 ...
	I1117 14:23:51.377426    2200 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:23:51.377441    2200 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:23:51.433561    2200 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4?checksum=md5:b55c92a19bc9eceed8b554be67ddf54e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117142321-2140"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.3/LogsDuration (0.28s)
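The `download.go:100` lines above show minikube appending a `?checksum=md5:...` query so that its downloader verifies the preload tarball after fetching. A hedged sketch of that verification step using a local stand-in file; the path and contents here are illustrative, not the real preload tarball:

```shell
# Write a stand-in "downloaded" file, then verify it against the md5
# we expect, the same check the ?checksum=md5:... query requests.
printf 'preload-bytes' > /tmp/preload-stub.tar.lz4
expected=$(printf 'preload-bytes' | md5sum | awk '{print $1}')
actual=$(md5sum /tmp/preload-stub.tar.lz4 | awk '{print $1}')
if [ "$expected" = "$actual" ]; then
  echo "checksum ok"
else
  echo "checksum mismatch" >&2
fi
```

On a corrupted or truncated download the two digests differ, which is what makes the cached tarball trustworthy on later `preload-exists` checks.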

TestDownloadOnly/v1.22.4-rc.0/json-events (8.88s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117142321-2140 --force --alsologtostderr --kubernetes-version=v1.22.4-rc.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211117142321-2140 --force --alsologtostderr --kubernetes-version=v1.22.4-rc.0 --container-runtime=docker --driver=docker : (8.884373704s)
--- PASS: TestDownloadOnly/v1.22.4-rc.0/json-events (8.88s)

TestDownloadOnly/v1.22.4-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.4-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.4-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.22.4-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.22.4-rc.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.22.4-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211117142321-2140
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211117142321-2140: exit status 85 (276.716348ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/11/17 14:23:59
	Running on machine: 37310
	Binary: Built with gc go1.17.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 14:23:59.186090    2229 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:23:59.186282    2229 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:23:59.186289    2229 out.go:310] Setting ErrFile to fd 2...
	I1117 14:23:59.186292    2229 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:23:59.186355    2229 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	W1117 14:23:59.186434    2229 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/config/config.json: no such file or directory
	I1117 14:23:59.186567    2229 out.go:304] Setting JSON to true
	I1117 14:23:59.210934    2229 start.go:112] hostinfo: {"hostname":"37310.local","uptime":1414,"bootTime":1637186425,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:23:59.211036    2229 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:23:59.238099    2229 notify.go:174] Checking for updates...
	I1117 14:23:59.265367    2229 config.go:176] Loaded profile config "download-only-20211117142321-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	W1117 14:23:59.265484    2229 start.go:683] api.Load failed for download-only-20211117142321-2140: filestore "download-only-20211117142321-2140": Docker machine "download-only-20211117142321-2140" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 14:23:59.265556    2229 driver.go:343] Setting default libvirt URI to qemu:///system
	W1117 14:23:59.265602    2229 start.go:683] api.Load failed for download-only-20211117142321-2140: filestore "download-only-20211117142321-2140": Docker machine "download-only-20211117142321-2140" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 14:23:59.361336    2229 docker.go:132] docker version: linux-20.10.6
	I1117 14:23:59.361481    2229 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:23:59.537674    2229 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-11-17 22:23:59.484070387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:23:59.564810    2229 start.go:280] selected driver: docker
	I1117 14:23:59.564834    2229 start.go:775] validating driver "docker" against &{Name:download-only-20211117142321-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:download-only-20211117142321-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:23:59.565258    2229 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:23:59.738413    2229 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:26 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-11-17 22:23:59.684419354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:23:59.740330    2229 cni.go:93] Creating CNI manager for ""
	I1117 14:23:59.740346    2229 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1117 14:23:59.740361    2229 start_flags.go:282] config:
	{Name:download-only-20211117142321-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4-rc.0 ClusterName:download-only-20211117142321-2140 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:23:59.767202    2229 cache.go:118] Beginning downloading kic base image for docker with docker
	I1117 14:23:59.793157    2229 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
	I1117 14:23:59.793165    2229 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 14:23:59.857971    2229 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	I1117 14:23:59.857998    2229 cache.go:57] Caching tarball of preloaded images
	I1117 14:23:59.858216    2229 preload.go:132] Checking if preload exists for k8s version v1.22.4-rc.0 and runtime docker
	I1117 14:23:59.884185    2229 preload.go:238] getting checksum for preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 14:23:59.926092    2229 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
	I1117 14:23:59.926105    2229 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
	I1117 14:23:59.976991    2229 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8bc3d17fd8aad78343e2b84f0cac75d1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v14-v1.22.4-rc.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211117142321-2140"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.4-rc.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:189: (dbg) Done: out/minikube-darwin-amd64 delete --all: (1.121569098s)
--- PASS: TestDownloadOnly/DeleteAll (1.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20211117142321-2140
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.64s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20211117142410-2140 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:226: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20211117142410-2140 --force --alsologtostderr --driver=docker : (8.010666697s)
helpers_test.go:175: Cleaning up "download-docker-20211117142410-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20211117142410-2140
--- PASS: TestDownloadOnlyKic (9.55s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.43s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 start --dry-run
--- PASS: TestErrorSpam/start (2.48s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 status
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 status: exit status 7 (147.708251ms)

-- stdout --
	nospam-20211117142510-2140
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 14:25:58.722795    2981 status.go:258] status error: host: state: unknown state "nospam-20211117142510-2140": docker container inspect nospam-20211117142510-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	E1117 14:25:58.722803    2981 status.go:261] The "nospam-20211117142510-2140" host does not exist!

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 status" failed: exit status 7
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 status
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 status: exit status 7 (148.132437ms)

-- stdout --
	nospam-20211117142510-2140
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 14:25:58.870961    2986 status.go:258] status error: host: state: unknown state "nospam-20211117142510-2140": docker container inspect nospam-20211117142510-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	E1117 14:25:58.870968    2986 status.go:261] The "nospam-20211117142510-2140" host does not exist!

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 status" failed: exit status 7
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 status
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 status: exit status 7 (147.881483ms)

-- stdout --
	nospam-20211117142510-2140
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E1117 14:25:59.019175    2991 status.go:258] status error: host: state: unknown state "nospam-20211117142510-2140": docker container inspect nospam-20211117142510-2140 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	E1117 14:25:59.019183    2991 status.go:261] The "nospam-20211117142510-2140" host does not exist!

** /stderr **
error_spam_test.go:182: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.44s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 pause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 pause: exit status 80 (198.113203ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117142510-2140": docker container inspect nospam-20211117142510-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 pause" failed: exit status 80
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 pause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 pause: exit status 80 (205.818239ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117142510-2140": docker container inspect nospam-20211117142510-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 pause" failed: exit status 80
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 pause
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 pause: exit status 80 (197.918147ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117142510-2140": docker container inspect nospam-20211117142510-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:182: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (0.60s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 unpause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 unpause: exit status 80 (277.178644ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117142510-2140": docker container inspect nospam-20211117142510-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 unpause" failed: exit status 80
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 unpause
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 unpause: exit status 80 (254.695381ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117142510-2140": docker container inspect nospam-20211117142510-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 unpause" failed: exit status 80
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 unpause
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 unpause: exit status 80 (251.354338ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "nospam-20211117142510-2140": docker container inspect nospam-20211117142510-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:182: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (0.78s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 stop
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 stop: exit status 82 (14.753382004s)

-- stdout --
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20211117142510-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 stop" failed: exit status 82
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 stop
error_spam_test.go:157: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 stop: exit status 82 (14.765973172s)

-- stdout --
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20211117142510-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:159: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 stop" failed: exit status 82
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 stop
error_spam_test.go:180: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 stop: exit status 82 (14.745349451s)

-- stdout --
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	* Stopping node "nospam-20211117142510-2140"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect nospam-20211117142510-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: nospam-20211117142510-2140
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:182: "out/minikube-darwin-amd64 -p nospam-20211117142510-2140 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20211117142510-2140 stop" failed: exit status 82
--- PASS: TestErrorSpam/stop (44.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1633: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/files/etc/test/nested/copy/2140/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/CacheCmd/cache/add_local (1.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1014: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20211117142648-2140 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20211117142648-21401769883917
functional_test.go:1026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache add minikube-local-cache-test:functional-20211117142648-2140
functional_test.go:1026: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache add minikube-local-cache-test:functional-20211117142648-2140: (1.061224478s)
functional_test.go:1031: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 cache delete minikube-local-cache-test:functional-20211117142648-2140
functional_test.go:1020: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20211117142648-2140
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.63s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 config get cpus
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 config get cpus: exit status 14 (43.825304ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 config set cpus 2
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 config unset cpus
functional_test.go:1136: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 config get cpus
functional_test.go:1136: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 config get cpus: exit status 14 (63.763664ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DryRun (1.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:912: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:912: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (568.369903ms)

-- stdout --
	* [functional-20211117142648-2140] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1117 14:30:37.476845    4550 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:30:37.476980    4550 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:30:37.476986    4550 out.go:310] Setting ErrFile to fd 2...
	I1117 14:30:37.476989    4550 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:30:37.477076    4550 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:30:37.477311    4550 out.go:304] Setting JSON to false
	I1117 14:30:37.502394    4550 start.go:112] hostinfo: {"hostname":"37310.local","uptime":1812,"bootTime":1637186425,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:30:37.502498    4550 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:30:37.530619    4550 out.go:176] * [functional-20211117142648-2140] minikube v1.24.0 on Darwin 11.2.3
	I1117 14:30:37.556029    4550 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:30:37.582249    4550 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:30:37.608181    4550 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:30:37.634070    4550 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:30:37.634784    4550 config.go:176] Loaded profile config "functional-20211117142648-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:30:37.635807    4550 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:30:37.728299    4550 docker.go:132] docker version: linux-20.10.6
	I1117 14:30:37.728433    4550 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:30:37.905601    4550 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:27 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:30:37.836649378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:30:37.954203    4550 out.go:176] * Using the docker driver based on existing profile
	I1117 14:30:37.954278    4550 start.go:280] selected driver: docker
	I1117 14:30:37.954287    4550 start.go:775] validating driver "docker" against &{Name:functional-20211117142648-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117142648-2140 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:30:37.954383    4550 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:30:37.978156    4550 out.go:176] 
	W1117 14:30:37.978449    4550 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1117 14:30:38.004362    4550 out.go:176] 

** /stderr **
functional_test.go:929: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.31s)

TestFunctional/parallel/InternationalLanguage (0.6s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:954: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211117142648-2140 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (599.486754ms)

-- stdout --
	* [functional-20211117142648-2140] minikube v1.24.0 sur Darwin 11.2.3
	  - MINIKUBE_LOCATION=12739
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1117 14:30:10.590216    4416 out.go:297] Setting OutFile to fd 1 ...
	I1117 14:30:10.590574    4416 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:30:10.590583    4416 out.go:310] Setting ErrFile to fd 2...
	I1117 14:30:10.590589    4416 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1117 14:30:10.591011    4416 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube/bin
	I1117 14:30:10.591271    4416 out.go:304] Setting JSON to false
	I1117 14:30:10.616591    4416 start.go:112] hostinfo: {"hostname":"37310.local","uptime":1785,"bootTime":1637186425,"procs":361,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1117 14:30:10.616687    4416 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1117 14:30:10.644117    4416 out.go:176] * [functional-20211117142648-2140] minikube v1.24.0 sur Darwin 11.2.3
	I1117 14:30:10.697360    4416 out.go:176]   - MINIKUBE_LOCATION=12739
	I1117 14:30:10.723451    4416 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
	I1117 14:30:10.749252    4416 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1117 14:30:10.775369    4416 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube
	I1117 14:30:10.776061    4416 config.go:176] Loaded profile config "functional-20211117142648-2140": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.3
	I1117 14:30:10.776658    4416 driver.go:343] Setting default libvirt URI to qemu:///system
	I1117 14:30:10.871950    4416 docker.go:132] docker version: linux-20.10.6
	I1117 14:30:10.872099    4416 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1117 14:30:11.050282    4416 info.go:263] docker info: {ID:SCH3:KUZR:QX4B:2EMZ:OZBW:JXN5:F6JV:DY72:V44T:W7ID:75XC:JMO5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:27 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2021-11-17 22:30:10.981457993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=sec
comp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1117 14:30:11.098597    4416 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I1117 14:30:11.098660    4416 start.go:280] selected driver: docker
	I1117 14:30:11.098672    4416 start.go:775] validating driver "docker" against &{Name:functional-20211117142648-2140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:functional-20211117142648-2140 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
	I1117 14:30:11.098768    4416 start.go:786] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1117 14:30:11.122984    4416 out.go:176] 
	W1117 14:30:11.123226    4416 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1117 14:30:11.148924    4416 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.60s)

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1482: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 addons list
functional_test.go:1494: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2037: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/ImageCommands/Setup (4.06s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:298: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:298: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.939980961s)
functional_test.go:303: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20211117142648-2140
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.06s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1213: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1218: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1253: (dbg) Run:  out/minikube-darwin-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1258: Took "295.265057ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1267: (dbg) Run:  out/minikube-darwin-amd64 profile list -l

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1272: Took "68.966027ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1304: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1309: Took "346.281212ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1317: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1322: Took "102.775056ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20211117142648-2140 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:333: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 image rm gcr.io/google-containers/addon-resizer:functional-20211117142648-2140
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:360: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20211117142648-2140
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211117142648-2140
functional_test.go:370: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20211117142648-2140
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20211117142648-2140 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.26s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:184: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20211117142648-2140
--- PASS: TestFunctional/delete_addon-resizer_images (0.26s)

                                                
                                    
TestFunctional/delete_my-image_image (0.11s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:192: (dbg) Run:  docker rmi -f localhost/my-image:functional-20211117142648-2140
--- PASS: TestFunctional/delete_my-image_image (0.11s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.11s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:200: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20211117142648-2140
--- PASS: TestFunctional/delete_minikube_cached_images (0.11s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.2s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20211117143126-2140 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.20s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.76s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20211117143328-2140 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20211117143328-2140 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (119.008787ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e497e876-5fbd-4981-a619-558606cdc758","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20211117143328-2140] minikube v1.24.0 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"105cdf9d-634e-4d12-a64d-bbd52006a0cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"6f302512-5962-462f-9108-536ef1981045","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig"}}
	{"specversion":"1.0","id":"b1760cda-a370-411b-a727-1eb142dc2e4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"5e416ecc-9c4c-4979-ad81-5903ca4a6084","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/.minikube"}}
	{"specversion":"1.0","id":"4a6f4c51-a7e2-4f7e-a58b-614e637138bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20211117143328-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20211117143328-2140
--- PASS: TestErrorJSONOutput (0.76s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (79.29s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20211117143504-2140 --network=bridge
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20211117143504-2140 --network=bridge: (1m13.85879497s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20211117143504-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20211117143504-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20211117143504-2140: (5.307652453s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (79.29s)

                                                
                                    
TestKicExistingNetwork (85.32s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20211117143628-2140 --network=existing-network
kic_custom_network_test.go:94: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20211117143628-2140 --network=existing-network: (1m15.555528462s)
helpers_test.go:175: Cleaning up "existing-network-20211117143628-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20211117143628-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20211117143628-2140: (5.296877359s)
--- PASS: TestKicExistingNetwork (85.32s)

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMountStart/serial/DeleteFirst (7.09s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20211117143749-2140 --alsologtostderr -v=5
pause_test.go:130: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20211117143749-2140 --alsologtostderr -v=5: (7.087208947s)
--- PASS: TestMountStart/serial/DeleteFirst (7.09s)

                                                
                                    
TestRunningBinaryUpgrade (251.98s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2204615653.exe start -p running-upgrade-20211117145209-2140 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2204615653.exe start -p running-upgrade-20211117145209-2140 --memory=2200 --vm-driver=docker : (1m29.701810557s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-20211117145209-2140 --memory=2200 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-20211117145209-2140 --memory=2200 --alsologtostderr -v=1 --driver=docker : (2m35.993152678s)
helpers_test.go:175: Cleaning up "running-upgrade-20211117145209-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20211117145209-2140
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20211117145209-2140: (5.547714179s)
--- PASS: TestRunningBinaryUpgrade (251.98s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.8s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.24.0 on darwin
- MINIKUBE_LOCATION=12739
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current2896277590
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current2896277590/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current2896277590/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current2896277590/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.80s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.69s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.24.0 on darwin
- MINIKUBE_LOCATION=12739
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--12739-993-80e07762e28b592b48b4aeaf3aab89efbbe303e1/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current1700946091
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current1700946091/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current1700946091/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current1700946091/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.69s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.78s)

                                                
                                    
TestPause/serial/DeletePaused (0.69s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-20211117145946-2140 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.69s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117145952-2140 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:89: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117145952-2140 "sudo systemctl is-active --quiet service kubelet": exit status 85 (94.072993ms)

                                                
                                                
-- stdout --
	* Profile "NoKubernetes-20211117145952-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117145952-2140"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.09s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117145952-2140 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:89: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20211117145952-2140 "sudo systemctl is-active --quiet service kubelet": exit status 85 (94.810747ms)

                                                
                                                
-- stdout --
	* Profile "NoKubernetes-20211117145952-2140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-20211117145952-2140"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (17/236)

TestDownloadOnly/v1.14.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.14.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.4-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.4-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.4-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.4-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.4-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.4-rc.0/binaries (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:491: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (12.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20211117142648-2140 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3216915611:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1637188211154385000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3216915611/created-by-test
functional_test_mount_test.go:110: wrote "test-1637188211154385000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3216915611/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1637188211154385000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3216915611/test-1637188211154385000
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (258.114014ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_mount_942a09ee47942f53ef42387527e953a0f5b12ae5_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (215.99373ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (208.036233ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (207.982576ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (209.199454ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (208.748723ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (206.552384ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:126: skipping: mount did not appear, likely because macOS requires prompt to allow non-codesigned binaries to listen on non-localhost port
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:93: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo umount -f /mount-9p": exit status 80 (202.41726ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_98b14adcd82ee1c7752a4e4be782b00e25555f68_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:95: "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"sudo umount -f /mount-9p\"": exit status 80
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20211117142648-2140 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3216915611:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.52s)

TestFunctional/parallel/MountCmd/specific-port (13.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20211117142648-2140 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3715068291:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (257.762889ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_mount_bed10d13a162750ce65b18354d886a0fc818f49b_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (206.215973ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (208.231082ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (210.719641ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (209.15065ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (203.020128ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (207.261884ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_be71c82dd30a8ca0893f84d6bd9c630c98a5fbba_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:263: skipping: mount did not appear, likely because macOS requires prompt to allow non-codesigned binaries to listen on non-localhost port
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh "sudo umount -f /mount-9p": exit status 80 (204.945876ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "functional-20211117142648-2140": docker container inspect functional-20211117142648-2140 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: functional-20211117142648-2140
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_ssh_98b14adcd82ee1c7752a4e4be782b00e25555f68_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:244: "out/minikube-darwin-amd64 -p functional-20211117142648-2140 ssh \"sudo umount -f /mount-9p\"": exit status 80
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20211117142648-2140 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3715068291:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (13.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.86s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20211117144907-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20211117144907-2140
--- SKIP: TestNetworkPlugins/group/flannel (0.86s)

TestStartStop/group/disable-driver-mounts (0.89s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20211117150013-2140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20211117150013-2140
--- SKIP: TestStartStop/group/disable-driver-mounts (0.89s)
